This is probably self-explanatory and unnecessary, but I think it bears mentioning.
I enjoy judging artists off of a fairly comprehensive understanding of everything they’ve published. The hitch is that nobody is perfectly consistent, and so you’ll get people who suck but get better, you’ll get people who start out good and then suck, and, well, yeah, you’ll get people who are pretty consistent. The question is, for those whose quality varies, do I take the average, take the peak, take the low? Which is the most accurate, which is the most fair, and if those aren’t the same, which do I prioritize?
I’ve settled on looking at a rough average of an artist’s work, with some variation as I see fit. If somebody is perfectly consistent, then it’s easy: no issues, I rate them on the quality of anything and everything they put out, because it’s all the same. Yay reliability.
If somebody releases a great first album and then deteriorates, I’ll likely not give them a straight average, but instead deduct a bit from their “actual” average. How can I trust somebody who is getting worse? They had the good first album, yeah, but the subsequent music makes me suspect that the first album may have been a fluke. Or even if it wasn’t a fluke, they’ve moved away from what made me like them, so I don’t have a reason to like them anymore. John Mayer and Jason Mraz are pretty good examples of this: I like both of their early stuff quite a lot, but as they’ve gotten older and kept releasing music I’ve stopped caring about them. They’ve lost my trust, they’ve gotten boring, lost whatever made them try (and succeed at) really interesting, fun stuff when they were younger. If I rate somebody an A+, it means both that I think what they’ve done thus far is about A+ level and that, when they release new music, I expect to find something of A+ quality. I deduct points for artists who have music at A+ level but make me worry that something worse is coming next time.
The other half of this is, of course, that I give points to artists whose music sits at a given level but makes me hope for something even better next time. This doesn’t often show up as a straight rank modifier, but instead as a general increase in likeability, where when I think of the artist I remember their most recent thing. Upstate is a band like that: I like their first album okay, but I love their second album, so I assume the second album is more representative of them.
The third half is about artists that haven’t given me enough information either way: artists with only one released album, or maybe with only a short EP or two. If Lovejoy or Arlo Parks release a second album that’s as good as their first, either one would earn a bump up a rank almost immediately. If I only have a single piece of evidence, I have to hedge whatever feelings I have about it.
All this to say, trust is a factor. In my preface for The List, I distinguished A tier and B tier as artists I like vs. artists who have made music I like. That factor is the biggest, easiest thing to use in separating those tiers, but it’s present in all my ratings. When you publish anything, including music (duh), you create a standard and a brand for yourself, and meeting or exceeding it is part of the job. If you don’t, you done goofed, and I think it would be bizarre if I didn’t like you less for it.
But yeah, that’s about it. When I rank an artist, I’m trying to use their music to rate them, so I watch for trends and I watch for consistency. I’m watching to see if the artist is dependable, because for me to say that they’re good, they need to keep being good.
I don’t know how necessary this is to write down. I’d think the idea is just so clearly implied, but I figure there’s value in being thorough. I suppose I can always delete this later anyway, so no biggie either way.