[HN Gopher] International Scientific Report on the Safety of Adv...
       ___________________________________________________________________
        
       International Scientific Report on the Safety of Advanced AI [pdf]
        
       Author : jdkee
       Score  : 36 points
       Date   : 2024-05-18 17:16 UTC (5 hours ago)
        
 (HTM) web link (assets.publishing.service.gov.uk)
 (TXT) w3m dump (assets.publishing.service.gov.uk)
        
       | joebob42 wrote:
       | How would they know? We don't have general AI but it's written as
       | if they already know what it will be, how safe it will be, etc.
       | 
        | I think it's an important topic to discuss and consider, but this
        | report seems to speak with more knowledge and authority than is
        | reasonable to me.
        
         | heyitsguay wrote:
         | From looking at the summary, I think it's a bit more measured
         | than this statement implies. They talk about concrete risks of
         | spam, scams, and deepfakes. They then go into possible future
         | harms but couched in language of "experts are uncertain if this
         | is possible or likely" etc.
        
         | mikpanko wrote:
         | I believe by "general-purpose AI" the report doesn't mean AGI.
        
           | ben_w wrote:
           | Which is one of those cases where I briefly want to reject
           | linguistic descriptivism because to me the "G" in "AGI" is
           | precisely "general".
           | 
           | But then I laugh at myself, because words shift and you have
           | to roll with the changes.
           | 
            | But do be aware that this shift of meanings is not
            | universally acknowledged, let alone accepted -- there are at
            | least half a dozen different meanings of the term "AGI", at
            | least one of which requires "consciousness", and there are
            | loads of different meanings of that too.
        
         | api wrote:
          | You've hit on a giant thing that bothers me about this
          | discourse: endless rationalistic theorizing about systems and
          | phenomena that we have no experience with at all, not even
          | analogous experience.
         | 
         | This is not like the atomic bomb. We had tons of experience
          | with big bombs. We just knew atom bombs, if they worked, could
          | make orders of magnitude larger booms. The implications of
          | really big bombs could be reasoned about with some basis in
          | reality.
         | 
         | It wasn't reasoning about wholly unknown types of things that
         | no human being has ever encountered or interacted with.
         | 
         | This is like a panel on protocols for extraterrestrial contact.
         | It'd be fine to do that kind of exercise academically but these
         | people are talking about passing actual laws and regulations on
         | the basis of reasoning in a vacuum.
         | 
         | We are going to end up with laws and regulations that will be
            | simultaneously too restrictive of human endeavor and
         | ineffective at preventing negative outcomes if this stuff ever
         | manifests for real.
        
           | FeepingCreature wrote:
           | I mean, we have an entire biosphere, us included, for samples
           | of entities of varying intelligence.
        
             | api wrote:
             | There are so many differences here vs organisms shaped by
             | evolution and involved in a food web with each other. This
             | is much closer to space aliens or beings from another
             | dimension.
             | 
             | If there are huge risks here they are probably not the ones
             | we are worried about.
             | 
             | Personally one of my biggest worries with both sentient AI
             | and aliens is how humans might react and what we might do
             | to each other or ourselves out of fear and paranoia.
        
           | ben_w wrote:
            | > This is not like the atomic bomb. We had tons of experience
            | with big bombs. We just knew atom bombs, if they worked,
            | could make orders of magnitude larger booms. The implications
            | of really big bombs could be reasoned about with some basis
            | in reality.
           | 
           | Well, we thought we did.
           | 
           | We really didn't fully appreciate the impact of the fallout
           | until we saw it; and Castle Bravo was much bigger than
           | expected because we didn't know what we were doing; and the
           | demon core; and the cold war arms race...
           | 
            | But yeah, my mental framing for this is a rerun of the first
            | stage of the industrial revolution. It took quite a lot of
            | harm to arrive at what is now basic workplace health and
            | safety, such as "don't use children to remove things from
            | heavy machinery while it's running", and we're likely to have
            | something equally dumb happen even in the relatively good
            | possible futures that don't have paperclip maximisers or
            | malicious humans using AI for evil.
        
         | boesboes wrote:
         | general purpose AI != AGI
         | 
          | It just means "not trained for exactly one task", i.e. LLMs
          | and the like, not AlphaFold.
        
           | joebob42 wrote:
           | This makes more sense, thank you. I hadn't picked up on the
           | distinction, but I agree that's more reasonable.
           | 
            | I still think we don't really know; it's a developing
            | technology changing so fast that it's probably too early for
            | experts on its practical applications to exist, let alone
            | claim to know the impact it will have.
        
         | tkwa wrote:
         | It seems fine to me. When there is evidence for a certain type
         | of current or future harm they present it, and when there is
         | not they express uncertainty.
         | 
         | Can AI enable phishing? "Research has found that between
         | January to February 2023, there was a 135% increase in 'novel
         | social engineering attacks' in a sample of email accounts
         | (343*), which is thought to correspond to the widespread
         | adoption of ChatGPT."
         | 
         | Can AIs make bioweapons? "General-purpose AI systems for
         | biological uses do not present a clear current threat, and
         | future threats are hard to assess and rule out."
        
       | ninetyninenine wrote:
        | The biggest risk of AI can be simply characterized by one
        | concept:
       | 
       | This very report could have been generated by AI. We can't fully
       | tell.
        
         | warkdarrior wrote:
         | So what if it was generated by AI? Does that invalidate its
         | contents in any way?
        
           | euroderf wrote:
            | Interesting point. Are we already at a state where an AI
            | could respond to email, phone calls, and even video calls in
            | a convincing way?
        
             | ben_w wrote:
              | Email, definitely; you just have to remember to fine-tune
              | so it says "sure, I'll get on that after lunch" rather than
              | "as a language model...".
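              | 
              | For instance, a single supervised fine-tuning pair might
              | look something like this (a toy sketch in Python; the field
              | names are illustrative, not any particular vendor's
              | format):
              | 
              |     # hypothetical training example: teach a casual register
              |     # instead of assistant boilerplate
              |     example = {
              |         "prompt": "Can you pull together the Q3 numbers?",
              |         "completion": "Sure, I'll get on that after lunch.",
              |     }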
             | 
             | Voice calls, yes: I attended a talk last summer where
             | someone did that with an AI trained on their own voice so
              | they didn't have to waste time on WhatsApp voice messages.
             | The interlocutors not only didn't notice, they actively
             | didn't believe it was AI when he told them (until he showed
             | them the details).
             | 
             | Video... I don't think so? But that's due to latency and
             | speed, and I'm basing my doubt on diffusion models which
             | may be the wrong tool for the job.
        
           | ninetyninenine wrote:
           | No it does not.
           | 
           | This is the scary part.
        
             | semi-extrinsic wrote:
             | Do you mean scary as in "it is scary that apparently
             | intelligent and sane people are wasting time and money on
             | producing documents that consist purely of meaningless
             | fluff"? If so, I agree.
        
         | panagathon wrote:
         | Sure you can. You simply email one of the listed authors and
         | ask them if the document is legit.
        
           | jonas21 wrote:
           | And if they say "yes, it is legit", what does that tell you?
        
             | readyman wrote:
             | That the author has risked their reputation on the claim.
             | If you're doubting the author is legit, interrogate their
             | professional associations with an internet search, relying
             | on the domain name system.
             | 
             | Nothing about any of this is new or profound. Counterfeit
             | documents have been around for hundreds of years.
        
               | swores wrote:
               | The question you replied to wasn't "why should you
               | believe someone who says they are behind a piece of
               | research", it was about the usefulness of receiving an
               | email saying it.
               | 
               | Their point (I assume) was that it would be illogical to
               | worry that the report might be written and released by AI
               | yet consider an email response as evidence against it.
               | 
               | If AI can create and release this report it can also
               | hijack a real person's email or create a fake persona
               | that pretends to be a real person.
        
               | ben_w wrote:
               | Three people make a tiger[0], and even current LLMs are
               | good at pretending to be a crowd.
               | 
               | [0] https://en.wikipedia.org/wiki/Three_men_make_a_tiger
        
             | StableAlkyne wrote:
              | There's a certain point where this line of thought just
              | becomes an AI-themed rehash of Ken Thompson's "Reflections
              | on Trusting Trust".
        
         | 123yawaworht456 wrote:
         | it could also be written by a communist/terrorist/nazi/russian
         | in Notepad on Windows XP. you can't fully tell.
        
           | ninetyninenine wrote:
            | We can't. The point is that two years ago we could tell, with
            | 100 percent certainty, whether something came from an AI or a
            | human.
            | 
            | Now we can't. In the future it will be even harder.
        
       | thegrim33 wrote:
       | "At a time of unprecedented progress in AI development, this
       | first publication restricts its focus to a type of AI that has
       | advanced particularly rapidly in recent years: General-purpose
       | AI, or AI that can perform a wide variety of tasks"
       | 
       | Is "general-purpose AI" supposed to be something different from
       | "AGI" or from "General Artificial Intelligence"? Or is it yet
       | another ambiguous ill-defined term that means something different
       | to every person? How many terms do we need?
       | 
        | It's funny that they claim that "general purpose AI" has
        | "advanced particularly rapidly" even though they didn't, and
        | can't, define what it even is. They have a glossary of terms at
        | the end, but don't bother to include an entry defining general
        | purpose AI, which the entire report is about. The closest thing
        | they include for defining the term is "AI that can perform a wide
        | variety of tasks".
        
         | krisoft wrote:
          | > It's funny that they claim that "general purpose AI" has
          | "advanced particularly rapidly" even though they didn't, and
          | can't, define what it even is.
         | 
         | I'm confused about what you are missing. You are quoting their
         | definition. "General-purpose AI, or AI that can perform a wide
         | variety of tasks". That's their definition. You might not like
         | it but that is a definition.
        
           | GPerson wrote:
           | A corporation, a question and answer website, a Minecraft
           | game, and basically any sufficiently complex system could all
           | be General-purpose AIs by that definition though. Having an
           | overly general and useless definition is not better than
           | having no definition in my opinion, so I see where the OP is
           | coming from.
        
             | FeepingCreature wrote:
             | I have seen people call corporations AIs. This is not
             | obviously wrong.
        
             | jprete wrote:
             | It's general-purpose AI as AI is commonly understood, which
             | is probably specific and useful enough for such a report.
        
         | zarzavat wrote:
         | AlphaFold is special-purpose AI. GPT-4o is general-purpose AI.
         | Seems clear to me.
         | 
         | "AGI" is a specific term of art that is a subset of general-
         | purpose AI.
         | 
         | AGI would have the capability to learn and reason at the same
         | level as the most capable humans, and therefore perform any
         | intellectual task that a human can.
        
       | hiAndrewQuinn wrote:
       | Recursively self-improving AI, of the kind Nick Bostrom outlined
       | in detail way back in his 2014 book _Superintelligence_ and Dr.
       | Omohundro outlined in brief in [1], is the only kind which poses
        | a true existential threat. I don't get out of bed for people
        | worrying about anything less when it comes to 'AI safety'.
        | 
        | On the topic: one potentially effective approach to stopping
        | recursively self-improving AI from being developed is a private
        | fine-based bounty system against those performing AI research in
        | general. A simple example would be "if you are caught doing AI
        | research, you are to pay the people who caught you 10 times your
        | yearly total comp in cash." Such a program would incur minimal
        | policing costs and could easily scale to handle international
        | threats. See [2] for a brief description.
       | 
       | If anyone wants to help me get into an econ PhD program or
       | something where I could investigate this general class of bounty-
       | driven incentives, feel free to drop me a line. I think it's
        | really cool, but I don't know much about how PhDs work. There's
        | actually nothing special about AI with regard to this approach;
        | it could be applied to anything we fear might end up being a
        | black-ball technology [3].
       | 
       | [1]: https://selfawaresystems.com/wp-
       | content/uploads/2008/01/ai_d...
       | 
       | [2]:
       | https://web.archive.org/web/20220709091749/https://virtual-i...
       | 
       | [3]: https://nickbostrom.com/papers/vulnerable.pdf
        
         | StableAlkyne wrote:
         | > Such a program would incur minimal policing costs and could
         | easily scale to handle international threats
         | 
         | Who ensures every country enforces the ban?
         | 
         | How do you ensure international cooperation against a nation
         | that decides to ignore it, against whom sanctions have no
         | effect? What if that nation is a nuclear power?
        
         | ben_w wrote:
         | > is the only kind which poses a true existential threat
         | 
          | You don't accept the possibility that a non-improving, tool-
          | like AI system, fixed at the level of "just got a PhD in
          | everything" by reading all the research papers on arxiv, might
          | be advanced enough for a Jim Jones type figure to design and
          | create a humanity-ending plague because they believe in
          | bringing about the end times?
        
           | kbenson wrote:
           | Wouldn't there be 100x more of the same capability looking
           | for threats and trying to head them off? A very advanced tool
           | is still just a tool, and subject to countermeasures.
           | 
           | I can see why countries would want to regulate it, but
           | personally I think it's a distinctly different category than
           | what the GP comment was talking about.
           | 
           | There is no stopping a singularity level event after it's
           | begun, at least not by any process where people play a role.
        
             | ben_w wrote:
             | > Wouldn't there be 100x more of the same capability
             | looking for threats and trying to head them off?
             | 
             | Hard to determine.
             | 
             | It's fairly easy to put absolutely everyone under 24/7
             | surveillance. Not only does almost everyone carry a phone,
              | but also laser microphones are cheap and simple, and WiFi
              | can be used as wall-penetrating radar capable of pose
              | detection at sufficient detail for heart-rate and breath-
              | rate sensing.
             | 
             | But people don't like it when they get spied on, it's
             | unconstitutional etc.
             | 
             | And we're currently living through a much lower risk arms
             | race of the same general description with automatic code
             | analysis to find vulnerabilities before attackers exploit
             | them, and yet this isn't always a win for the defenders.
             | 
              | Biology is not well-engineered code, but we have had to
              | evolve a general-purpose anti-virus system, so while I do
              | expect attackers to have huge advantages, I have no idea if
              | I'm right to think that, nor how big the advantage would be
              | in the event that I am right.
             | 
             | > There is no stopping a singularity level event after it's
             | begun, at least not by any process where people play a role
             | 
              | Mm, though I would caution that a singularity in a model is
              | a sign the model is wrong. To simplify to IQ (a flawed
              | metric): an AI that makes itself smarter may stop at any
              | point because it can't figure out the next step. That may
              | be an IQ 85 AI that can only imagine reading more stuff, or
              | an IQ 115 AI that knows it wants more compute so it starts
              | a business that just isn't very successful, or an IQ 185 AI
              | that does all kinds of interesting things but still doesn't
              | know how to take the next step any more than the 80 humans
              | smarter than it, or an IQ 250 AI that beats every human who
              | has ever lived (IQ 250 is 10 sigma above the mean; the
              | probability of beating 10 sigma is p ≈ 7.62e-24, and one
              | way or another, once there have been that many humans,
              | they're probably no longer meaningfully human) but still
              | doesn't know what to do next.
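              | 
              | A quick sanity check on that tail figure (a sketch,
              | assuming IQ is normally distributed with mean 100 and
              | standard deviation 15):
              | 
              |     # probability that a standard normal exceeds 10 sigma
              |     from scipy.stats import norm
              |     sigma = (250 - 100) / 15  # IQ 250 -> 10 sigma
              |     print(norm.sf(sigma))     # ~7.62e-24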
             | 
             | I prefer to think of it as an event horizon: beyond this
             | point (in time), you can't even make a reasonable attempt
             | at predicting the future.
             | 
             | For me, this puts it at around 2030 or so, and has done for
             | the last 15 years. Too many exponentials start to imply
             | weird stuff around then, even if the weird stuff is simply
             | "be revealed as a secret sigmoid all along".
        
           | impossiblefork wrote:
           | I believe that plagues are really easy to make, and that
           | making them is already accessible to most PhDs in
           | biomedicine, many PhD students and some particularly talented
           | high school students.
           | 
            | Most of the skills involved are, I believe, laboratory and
            | biology experiment-debugging skills rather than something
            | like deductive intelligence.
           | 
            | The easiest things aren't actually 'plagues' as such, and the
            | high-school-accessible approaches would require being able to
            | order things from DNA synthesis labs. I think it's accessible
            | to an incredible number of people, basically everyone I know
            | who does biology. None of them would ever think about doing
            | this, though, and if I asked them they would probably not
            | know how to do it, because they'd never direct their thought
            | in that direction.
           | 
           | I think the real concern with an AI system that is like
           | someone with a PhD in everything should rather be that it'll
           | be really hard to get a job if that kind of thing is
           | available. It'll give enormous power to land and capital
            | owners. That alone is really dangerous enough, though.
        
         | Simon_ORourke wrote:
          | Try getting that bounty enforced in the courts! All it'll take
          | is one loophole in what would surely be a few vague, poorly
          | drafted laws, and the floodgates open again.
        
         | mrshadowgoose wrote:
          | In your mind, what will our world look like once AGI is
          | achieved and that technology is, as seems likely, exclusively
          | in the hands of governments and large corporations? What
          | guarantees do
         | we have that it will be used for the benefit of the common
         | person? What will happen to the large swathes of the human
         | population that will not only be permanently economically
         | useless, but worse than useless? They'll need to be fed,
         | clothed, housed and entertained with the only things they
         | provide back being their opinions and complaints.
         | 
         | I literally couldn't care less about recursively-improving
         | superintelligence. AGI in the hands of the rich and powerful is
         | already a nightmare scenario.
        
         | johndough wrote:
          | > A simple example would be "if you are caught doing AI
          | research, you are to pay the people who caught you 10 times
          | your yearly total comp in cash."
         | 
         | That sounds like a prime example of perverse incentive:
         | https://en.wikipedia.org/wiki/Perverse_incentive
         | 
         | > A perverse incentive is an incentive that has an unintended
         | and undesirable result that is contrary to the intentions of
         | its designers.
         | 
          | For example, the British government offering a bounty on dead
          | cobras led to a large number of cobra breeders.
         | 
          | Your proposal is conceptually similar to the Alberta Child,
         | Youth and Family Enhancement Act (fourth bullet point):
         | https://en.wikipedia.org/wiki/Perverse_incentive#Community_s...
        
         | dragonwriter wrote:
         | > Such a program would incur minimal policing costs
         | 
         | No, it wouldn't. Just because the policing costs aren't tax-
         | funded doesn't mean they don't exist. (And I'm not talking just
         | about costs voluntarily incurred by bounty seekers, I'm also
         | talking about the cost that the system imposes involuntarily on
         | others who are neither actually guilty nor bounty seekers,
         | because the financial incentives can motivate pursuits imposing
         | costs on targets who are not factually guilty.)
         | 
         | > and could easily scale to handle international threats.
         | 
         | No, it couldn't, especially for things where a major source of
         | the international threat is governmental, where private
         | bounties aren't going to work at all.
         | 
          | And it especially can't work against _developing technology_,
          | where the evidence needed in litigation, and the litigation
          | itself, would be a vector for spreading the knowledge that you
          | are attempting to repress.
        
       | wslh wrote:
        | Clearly AI is unstoppable, as it is math. I don't get how
        | politicians or intellectuals continue to argue without
        | understanding that simple truth. I don't know if AI will be
        | comparable to human or organizational intelligence, but I know
        | that it is unstoppable.
        | 
        | Just ranting, but a potential way of defending against it is to
        | take an approach similar to cryptography's answer to quantum
        | computers: think harder about whether we can excel at something
        | even assuming (some) AI is there.
        
         | hollerith wrote:
         | >Clearly AI is unstoppable as it is math.
         | 
         | I don't understand this argument. It takes _years_ to become
         | competent at the math needed for AI. If stopping AI is
         | important enough, society will make teaching, learning and
         | disseminating writings about that math illegal. After that,
         | almost no one is going to invest the years needed to master the
         | math because it no longer helps anyone advance in their
         | careers, and the vast majority of the effect of math on society
         | is caused by people who learned the math because they expected
         | it to advance their career.
        
           | wslh wrote:
            | It is very simple: powerful governments tried to stop
            | cryptography, and we know what happened. Governments also
            | tried to prohibit alcohol, etc.; it does not work. You can
            | get it even in places such as Saudi Arabia. Is it expensive?
            | For sure, but when it is science that you can run on your own
            | computers, nothing can stop it. Will they mandate a Clipper
            | chip?
        
             | hollerith wrote:
             | The difference between "stop AI" and "stop cryptography" is
             | that those of us who want to stop AI want to stop AI models
             | from becoming more powerful by stopping future mathematical
             | discoveries in the field. In contrast, the people trying to
             | stop cryptography were trying to stop the dissemination of
             | math that had already been discovered and understood well
             | enough to have been productized in the form of software.
             | 
             | Western society made a decision in the 1970s to stop human
             | germ-line engineering and cloning of humans, and so far
             | those things have indeed been stopped not only in the West,
             | but worldwide. They've been stopped because no one
             | currently knows of an effective way to, e.g., add a new
             | gene to a human embryo. I mean that (unlike the situation
             | in cryptography) there is no "readily-available solution"
             | that enables it to be done without a lengthy and expensive
             | research effort. And the reason for that lack of
             | availability of a "readily-available solution" is the fact
             | that no young scientists or apprentice scientists have been
             | working on such a solution -- because every scientist and
             | apprentice scientist understood and understands that
             | spending any significant time on it would be a bad career
             | move.
             | 
                | Those of us who want to stop AI don't care if you run
                | Llama on your 4090 at home. We don't even care if
                | ChatGPT, etc., remain available to everyone. We don't
                | care because Llama and ChatGPT have been deployed long
                | enough and in enough diverse situations that if any of
                | them were dangerous, the harm would have occurred by now.
                | We do want to stop people from devoting their careers to
                | looking for new insights that would enable more powerful
                | AI models.
        
               | wslh wrote:
                | Well, in my book that is called obscurantism, and it has
                | never worked for long. It would be the first time in
                | humanity's history that something like this works
                | forever. I think once the genie is out of the bottle you
                | cannot put it back in.
                | 
                | If I take the science fiction route, I would say that
                | humans in your position should think about moving to
                | another planet and creating military defenses against AI.
        
               | wholinator2 wrote:
               | There's several assumptions you're making. First, that
               | sufficient pressure will be built up into stopping AI
               | before drastic harms occur instead of after, at which
               | point stopping the math will be exactly the same as was
               | stopping cryptography.
               | 
               | And that should there be no obvious short term harms to a
               | technology, there can be no long term harms. I don't
               | think it's self evident that all the harms would've
               | already occurred. Surely humanity has not yet reached
               | every type and degree of integration with current
               | technology possible.
        
           | wyldfire wrote:
           | > society will make teaching, learning and publishing about
           | that math illegal
           | 
           | If there were ever a candidate for Poe's Law comment of the
           | year, this comment on HN would be it.
           | 
           | So much literature depicts just such a dystopia where the
           | technology unleashes humanity's worst and they decide to
           | prevent education in order to avoid the fate of the
           | previously fallen empire.
        
           | johndough wrote:
           | > It takes years to become competent at the math needed for
           | AI
           | 
           | (Assuming that "AI" refers to large language models)
           | 
           | The best open source LLM fits in less than 300 lines of code
           | and consists mostly of matrix multiplications.
           | https://github.com/meta-
           | llama/llama3/blob/main/llama/model.p...
           | 
           | Anyone with a basic grasp of linear algebra can probably
           | learn to understand it in a week. Here is a video playlist by
           | former Stanford professor and OpenAI employee Andrej Karpathy
           | which should cover most of it (less than 16 hours total): htt
           | ps://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxb...
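            | 
            | To make "mostly matrix multiplications" concrete, here is a
            | toy single-head self-attention in numpy -- a sketch of the
            | core operation, not the actual Llama code:
            | 
            |     import numpy as np
            | 
            |     def softmax(x):
            |         e = np.exp(x - x.max(axis=-1, keepdims=True))
            |         return e / e.sum(axis=-1, keepdims=True)
            | 
            |     def self_attention(x, Wq, Wk, Wv):
            |         # x: (seq_len, d_model); weights: (d_model, d_head)
            |         q, k, v = x @ Wq, x @ Wk, x @ Wv         # three matmuls
            |         scores = q @ k.T / np.sqrt(k.shape[-1])  # one more
            |         return softmax(scores) @ v               # and another
            | 
            |     rng = np.random.default_rng(0)
            |     x = rng.normal(size=(4, 16))   # 4 tokens, d_model = 16
            |     Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
            |     print(self_attention(x, Wq, Wk, Wv).shape)   # (4, 8)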
        
       | consumer451 wrote:
       | I understand that this report is not really about AGI, but I
       | would like to again raise my main concern: I am much more worried
       | about the implications of the real threat of dumb humans using
       | dumb "AI" in the near-term, than I am about the theoretical
       | threat of AGI.
       | 
       | Example:
       | 
       | https://news.ycombinator.com/item?id=39944826
       | 
       | https://news.ycombinator.com/item?id=39918245
        
         | tkwa wrote:
          | This is like someone saying "I am much more worried about the
          | implications of dumb humans using flintlock muskets in the near
          | term than I am about the theoretical threat of machine guns
          | and nuclear weapons." Surely the potential for both misuse and
          | mistakes goes up the more powerful the technology gets.
        
           | consumer451 wrote:
           | That's fair, but to keep going with the analogy: we are
           | currently the Native Americans in the 1500's, and the
           | Conquistadors are coming ashore with their flintlocks (ML).
           | Should we be more worried about them, or the future B-2
           | bombers, each armed with sixteen B83 nukes (AGI)?
           | 
           | I understand that the timeline may be exponentially more
           | compressed in our modern case, but should we ignore the
           | immediate problem?
           | 
           | In this analogy, the flintlocks could be actual ML-powered
           | murder bots, or just ML-powered economic kill bots, both
           | fully controlled by humans.
           | 
           | The flintlocks enable the already powerful to further
           | consolidate their power, to the great detriment of the less
            | powerful. No super AGI is necessary; it just takes a large
            | handful of human Conquistador sociopaths with >1,000x
            | "productivity" gains to erase our culture.
           | 
           | I don't understand how we could ever get to the point of
           | handling the future B-2 nuke problem, as a civilization,
           | without first figuring out how to properly share the benefits
           | of the flintlock.
        
       | blackeyeblitzar wrote:
       | I feel like a lot of the arguments listed in "4.1.2
        | Disinformation and manipulation of public opinion" apply in a
        | non-AI world. In social media, sites like Hacker News are rare. In
       | most places you'll see comments that are (purposely) sharing a
       | subset of the whole truth in order to push a certain opinion.
       | From my perspective disinformation and manipulation of public
       | opinion already happens "at scale". Most social media users
       | belong to one political side or the other, and there are lots of
       | them, and almost all of them are either factually incorrect or
       | lack nuance or an understanding of their opponents' view.
        
       ___________________________________________________________________
       (page generated 2024-05-18 23:02 UTC)