A new idea for AI oversight: inject a "soul" into each AI
Editor's note: The development of generative artificial intelligence has outpaced everyone's expectations, and many experts have begun calling for a moratorium on AI development to give humanity time to work out how to regulate and respond to it. In a sense, today's generative AI can be compared to an invasive species spreading through humanity's fragile network ecosystem, and how to supervise it has become a serious problem. This article proposes a brand-new regulatory perspective: let artificial intelligences supervise one another, compete with one another, and even inform on one another. Humans may one day be unable to keep up with artificial intelligence, but a balance can always be struck between different artificial intelligences. This article is a compiled translation; we hope it inspires you.
Leading figures in the field of artificial intelligence, including the architects of so-called "generative AI" systems like ChatGPT, are now publicly expressing fears that their creations may have dire consequences. Many are calling for a moratorium on AI development to give countries and institutions time to work on control systems.
Why the sudden concern? Among the many clichéd assumptions being overturned, we have learned that the famous Turing test is irrelevant: it offers no insight into whether large generative language models are actually intelligent.
Some still hold out hope that a merger of the organic and the cybernetic will lead to what Reid Hoffman and Marc Andreessen call "amplification intelligence," or that we might stumble into a lucky synergy with Richard Brautigan's "machines of loving grace." But there appear to be plenty of worriers, including many of the elite founders of the new Center for AI Safety, who fear that artificial intelligence will not only behave unpleasantly but will threaten humanity's survival.
Some short-term remedies, such as the regulations recently passed by the EU to protect its citizens, may help, or at least provide peace of mind. Tech critic Yuval Noah Harari (author of Sapiens: A Brief History of Humankind) has proposed a law requiring that any work done by an AI or other non-human agent be labeled as such. Others have suggested stiffer penalties for anyone who uses AI to commit a crime, as we do with firearms. Of course, these are only temporary stopgaps.
And we should be aware that these "pauses" will do little to slow AI's progress. As Caltech scientist Yaser Abu-Mostafa puts it: "If you don't develop the technology, someone else will. The good guys will play by the rules, and the bad guys won't."
It has always been this way. Indeed, throughout human history there has been only one way to curb the bad behavior of villains, from thieves to kings and lords. It was never a perfect method, and it remains seriously flawed to this day. But at least it limited plunder and deceit well enough to propel modern human civilization to new heights, with many positive results. One word describes it: accountability.
**Yet today's views on artificial intelligence usually ignore the lessons of nature and history.**
Nature first. As Sara Walker explains in Noema, a similar pattern can be found in the emergence of early life four billion years ago. Indeed, generative AI can be likened to an invasive species now spreading unchecked through a fragile and naive ecosystem: an ecosystem based on new flows of energy, made up of the Internet, millions of computers, and billions of impressionable human brains.
And history. Over the past 6,000 years, humanity has learned hard lessons from many earlier technology-driven crises. Usually we adapted well, as with the advent of writing, the printing press, and radio, though at times we failed. And again, only one thing has ever limited powerful humans from exploiting new technologies to expand their predatory reach.
That innovation was to flatten hierarchies and stimulate competition among elites within clearly defined arenas: markets, science, democracy, sports, courts. Designed to minimize cheating and maximize positive-sum outcomes, these arenas pit lawyer against lawyer, firm against firm, and expert against expert.
The method is not perfect. Indeed, just as now, it is forever threatened by cheaters. But flattened, reciprocal competition is the only way it has ever worked. (See Pericles's funeral oration in Thucydides, and Robert Wright's later book Nonzero.) Reciprocal competition is both how nature evolved us and how we became creative enough to build AI. If I sound like Adam Smith in saying this, that's only natural. Smith, by the way, also despised cheating aristocrats and oligarchs.
Can reciprocal accountability, the method that helped humans subdue the tyrants and bullies who oppressed us in earlier feudal cultures, be applied to fast-emerging artificial intelligence? Much depends on the shape these new entities take: on whether their structure, their form, can conform to our rules, our requirements.
Behind all the debate about how to control AI, we find three widely shared (though seemingly contradictory) assumptions:

1. that these programs will be controlled and owned by a handful of monolithic entities, such as corporations or governments;
2. that they will be countless, separate, and unaccountable, flowing, splitting, and replicating freely across the Internet; or
3. that they will coalesce into a single all-powerful super-intelligence, a "Skynet."
All three of these forms have been explored in science fiction, and I have written stories or novels about each. Yet none of the three can resolve our current dilemma: how to maximize the positive outcomes of artificial intelligence while minimizing the tsunami of bad behavior and harm rushing toward us.
Before looking elsewhere, consider what these three assumptions have in common. Perhaps they come to mind so naturally because they resemble historical patterns of failure: the first recalls feudalism, the second invites chaos, and the third resembles brutal despotism. But as AIs gain autonomy and capability, these historical scenarios may no longer apply.
So we cannot help asking again: how can AI be held accountable? Especially when AI's rapid cognition will soon be impossible for humans to track? Soon, only AIs will be fast enough to catch other AIs cheating or lying. The answer, then, should be obvious: **let artificial intelligences supervise one another, compete with one another, and even inform on one another.**
There's just one catch. To achieve genuine reciprocal accountability through AI-versus-AI competition, the first requirement is to give them a truly independent sense of self, a distinct individuality.
By individuality I mean that each AI entity (he/she/they/them) must have what author Vernor Vinge proposed back in 1981: a true name and an address in the real world. These powerful beings must be able to say, "I am who I am. Here is my ID and my username."
I therefore propose a new AI paradigm for everyone to consider: we should make artificial intelligence entities discrete, independent individuals, and let them compete on relatively equal terms.
Each such entity would have a recognizable true name or registered ID, a virtual "home," even a soul. They would then be incentivized to compete for rewards, above all by spotting and denouncing peers that behave unethically. And those behaviors would not even need to be defined in advance, as most AI experts, regulators, and politicians now demand.
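To make the proposal concrete, here is a minimal sketch, under my own assumptions, of what a registered identity record for such an individuated entity might contain. Every field and name below is an illustrative invention, not a specification from this essay.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIIdentity:
    """Hypothetical registry entry for one discrete, individuated AI."""
    true_name: str       # the recognizable "real name"
    registered_id: str   # unique ID issued at registration
    virtual_home: str    # the network address the entity calls home
    sk_fingerprint: str  # public hash of its Soul Kernel (introduced below)

# A made-up example entity; the digest is a placeholder, not a real hash.
alice = AIIdentity(
    true_name="AliceNine",
    registered_id="reg-000017",
    virtual_home="node7.example.net",
    sk_fingerprint="ab12...ef90",
)
```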
This approach has the added advantage of outsourcing oversight to entities better equipped to spot and denounce one another's problems or misconduct. And it can keep working even as the competing entities grow smarter, even after the regulatory tools wielded by humans have one day lost their bite.
**In other words, since we organic beings cannot keep pace with the programs, we might as well let the entities that naturally can keep pace help us. For here, the regulators would be made of the same stuff as the regulated.**
Guy Huntington, an "identity and authentication consultant" who works on AI personalization, points out that various forms of entity identification already exist online, though they remain insufficient for the tasks ahead. Huntington evaluated a case study of "MedBot," an advanced medical-diagnostic AI that must access patient data and perform functions that can change within seconds, while leaving reliable audit trails for evaluation and accountability by humans or other robotic entities. Huntington discusses the usefulness of registries when software entities spawn large numbers of copies and variants, and also considers ant-like colonies in which sub-copies serve a macro entity, like worker ants in a hive. In his view, some kind of agency must be established to run such a registration system, and to run it rigorously.
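Here is a sketch of how such a registry might handle copies and variants, assuming (my invention, not Huntington's design) that every sub-copy is recorded with a lineage pointer back to a macro entity whose own record is anchored.

```python
class EntityRegistry:
    """Toy registry tracking macro entities and their vouched-for sub-copies."""

    def __init__(self) -> None:
        self.records: dict[str, dict] = {}

    def register_root(self, entity_id: str, sk_fingerprint: str) -> None:
        # A macro entity whose Soul Kernel anchors the whole lineage.
        self.records[entity_id] = {"sk": sk_fingerprint, "parent": None}

    def register_copy(self, entity_id: str, parent_id: str) -> None:
        # A sub-copy (a "worker ant") vouched for by its parent.
        if parent_id not in self.records:
            raise ValueError("parent must be registered before its copies")
        self.records[entity_id] = {"sk": None, "parent": parent_id}

    def root_of(self, entity_id: str) -> str:
        # Walk the lineage back to the anchored macro entity.
        while self.records[entity_id]["parent"] is not None:
            entity_id = self.records[entity_id]["parent"]
        return entity_id
```

The same lineage walk is what would let a lower-level entity be vouched for by a higher one, an idea this essay returns to below.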
Personally, I am skeptical that a purely regulatory approach can work on its own. First, crafting regulations demands focused energy, broad political attention, and consensus, followed by implementation at the speed of human institutions; from an AI's point of view, that is a snail's pace. Regulation can also be hobbled by "free rider" problems, in which countries, companies, and individuals reap the benefits of others' compliance without paying the costs.
Any individuation based solely on an ID presents another problem: spoofing. Even if it cannot be done today, the next generation of cyber villains will manage it.
I see two possible solutions. The first: establish IDs on a blockchain ledger. That is the very modern approach, and it does appear secure in theory. But therein lies the problem: it seems secure according to our current set of human-devised theories, and an AI entity might transcend those theories and leave us clueless.
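As a toy illustration of the blockchain option, consider an append-only hash chain of registrations, in which tampering with any earlier entry breaks every later hash. This is a sketch of the concept only; a real deployment would use an actual distributed ledger, and, as just noted, its security rests on assumptions an advanced AI might overturn.

```python
import hashlib
import json
import time

class ToyIdentityLedger:
    """Append-only hash chain of ID registrations (illustrative only)."""

    def __init__(self) -> None:
        self.blocks: list[dict] = []

    def register(self, registered_id: str, sk_fingerprint: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "registered_id": registered_id,
            "sk_fingerprint": sk_fingerprint,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)
        return block

    def verify_chain(self) -> bool:
        """Any tampering with an earlier registration breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if block["prev_hash"] != prev or recomputed != block["hash"]:
                return False
            prev = block["hash"]
        return True
```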
The second solution: a version of "registration" that is inherently harder to spoof, requiring AI entities above a certain capability level to anchor their trust-ID, their individuality, in physical reality. My idea (bear in mind, I was trained as a physicist, not a cyberneticist) is an agreement that all advanced AI entities seeking trust should maintain a Soul Kernel (SK).
Yes, I know it seems archaic to demand that the instantiation of a program be restricted to a specific physical locale. So I will not demand that. Indeed, a significant fraction, perhaps the vast majority, of a cyber entity's activity may take place in far-off sites of work or play, just as a human's attention may be focused not inside their own organic brain but on a distant hand or tool. So what? A program's Soul Kernel would serve much the same purpose as the driver's license in your wallet: it proves that you are you.
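What might "keeping a Soul Kernel" look like in software? A minimal sketch, under my own assumptions: a random secret that, by agreement, resides at one physically addressable hardware location, with only its fingerprint made public. Nothing in this code enforces the physical anchoring; that is precisely the part that would require hardware and institutional support in any real scheme.

```python
import hashlib
import secrets

def create_soul_kernel(num_bytes: int = 64) -> tuple[bytes, str]:
    """Generate an SK secret plus the public fingerprint to register."""
    sk_secret = secrets.token_bytes(num_bytes)           # stays in the anchored hardware
    fingerprint = hashlib.sha256(sk_secret).hexdigest()  # published to the registry
    return sk_secret, fingerprint
```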
Likewise, a physically verified and vouched-for SK could be queried by customers, clients, or competitor AIs to confirm that a given process is being performed by a valid, trusted, individuated entity. Others (human or AI) could then rest assured that the entity can be held accountable should it be accused, prosecuted, or convicted of bad behavior. Malicious entities could thus be held responsible through some form of due process.
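A peer-side check might then look like the following sketch: before transacting, query the ledger for the counterparty's record and compare fingerprints, refusing to deal with anyone who fails. The lookup callable and proof format are assumptions of mine, not part of the proposal.

```python
import hashlib
from typing import Callable, Optional

def verify_counterparty(registered_id: str,
                        claimed_sk_proof: bytes,
                        ledger_lookup: Callable[[str], Optional[dict]]) -> bool:
    """Return True only if the counterparty's SK matches its registration."""
    record = ledger_lookup(registered_id)
    if record is None:
        return False  # unregistered: refuse to transact
    fingerprint = hashlib.sha256(claimed_sk_proof).hexdigest()
    return fingerprint == record["sk_fingerprint"]
```

In practice one would use a challenge-response protocol rather than transmitting the secret itself; the sketch only shows where verification gates the transaction.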
What forms of due process? Good lord, do you take me for some superbeing who can weigh gods on the scales of justice? The greatest piece of wisdom I have ever heard comes from Harry in Magnum Force: "A man's got to know his limitations." So I will not wade further into courtroom or policing procedure.
My goal is to create an arena in which AI entities can hold one another accountable, much as human lawyers do today. The best way to keep artificial intelligence from controlling humans is to let artificial intelligences check one another.
Whether Huntington's proposed central agency or a looser, mutually accountable arrangement proves more feasible, the need grows ever more pressing. As tech writer Pat Scannell points out, every passing hour creates new attack vectors that threaten not only the technologies used for legal identity but also governance, business processes, and end users (whether human or robotic).
What about cyber entities operating below some set capability level? We can require that they be vouched for by a higher-level entity whose own Soul Kernel is anchored in physical reality.
This approach (requiring AIs to maintain a physically addressable kernel in a specific piece of hardware memory) may have flaws too. Yet unlike regulation, which is slow and plagued by free riders, it is enforceable: humans, institutions, and friendly AIs can verify an ID kernel and refuse to transact with any entity that fails verification.
Such refusals could spread far faster than agencies can adjust or enforce regulations. Any entity that loses its SK would have to find another host that commands public trust, or offer a new, modified, demonstrably better version of itself, or become an outlaw, never again permitted on the streets or in the neighborhoods where decent folk congregate.
**Last question: why would artificial intelligences be willing to supervise one another?**
First, as Vinton Cerf has pointed out, none of the three old standard assumptions offers a path to citizenship for AI. Think about it: we cannot grant the "vote," or rights, to any entity tightly controlled by a Wall Street bank or a national government, nor to some supreme Skynet. And tell me, how would electoral democracy work for entities that can flow, split, and replicate anywhere? In a limited number of cases, however, individuation might offer a workable solution.
Again, the key thing I seek from individuation is not to have all AI entities ruled by some central agency. Rather, I want these new kinds of ultra-minds encouraged and empowered to hold one another accountable, the way humans already do. By sniffing at one another's actions and schemes, they would be motivated to report or denounce whenever they spot something bad. The definition of "bad" might adjust with the times, but at least it would retain input from organic, biological humans.
In particular, they would have an incentive to denounce entities that refuse to provide proper identification.
If the right incentives are in place (say, granting whistleblowers more memory or processing power whenever they prevent something bad), this accountability race can keep working even as AI entities grow ever smarter. No bureaucracy could keep up at that point, but different artificial intelligences would remain evenly matched.
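As a sketch of such an incentive rule (all numbers and names are invented for illustration, including a penalty for false reports that the essay does not itself propose):

```python
REPORT_BOUNTY = 1000        # compute credits for a verified denunciation
FALSE_REPORT_PENALTY = 250  # my assumption: discourage frivolous reports

def settle_report(reporter_quota: int, report_verified: bool) -> int:
    """Adjust a whistleblower AI's compute quota after adjudication."""
    if report_verified:
        return reporter_quota + REPORT_BOUNTY
    return max(0, reporter_quota - FALSE_REPORT_PENALTY)
```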
Most important of all, perhaps those super-genius programs will realize that maintaining a competitive accountability system is in their own best interest. After all, such a system gave rise to a creative human civilization that avoided both social chaos and despotism, a civilization creative enough to produce fantastic new species, such as artificial intelligence.
All right, that is all I have to say: no hollow or panicked appeals, no real agenda, neither optimism nor pessimism, just one suggestion: hold AIs accountable to one another and let them check one another, the way humans do. This approach gave us human civilization, and I believe it can bring balance to the field of artificial intelligence as well.
This is not preaching, nor some "moral code" that super-entities could easily override, the way human marauders have always turned a blind eye to Leviticus or Hammurabi. What we offer is the approach of the Enlightenment: enlisting the brightest members of civilization to police one another on our behalf.
**I don't know if this will do the trick, but it's probably the only way that will work.**
This article is adapted from David Brin's nonfiction work-in-progress, Soul on AI.