Implementation plan

Technology and financial solution provider: Shenzhen Oriental Citigroup Holdings Group
When a city's perception capability, together with the computing power behind reconstruction, modeling, rendering, and simulation, reaches a critical threshold, city-scale digital twins become possible. Building city-level digital twins opens up a new computing paradigm for the urban domain. The urban digital twin world is not merely a visual rendering of the city; it is a new carrier for many urban businesses and a foundation for business innovation. The urban digital twin not only accurately captures the city's present but also comprehensively records its history. More importantly, by integrating multi-domain simulation capabilities and supporting cloud-native simulation built on cloud computing, urban digital twins can run large-scale joint simulations and deductions. On a single platform, simulations across multiple domains and at multiple precisions can run simultaneously and interact with one another, inferring the city's potential future development from its past.
—— Zeng Zhenyu
Vice President of Alibaba Cloud Intelligence and Head of the Industry Solution R&D Department

Background of Urban AI Metaverse

Since it was first proposed in 2017, the urban digital twin has been widely promoted and recognized as a new method for refined urban governance. In recent years, its key technologies have achieved breakthroughs from quantity to quality, most visibly in scale: large-scale dynamic perception mapping (lower modeling cost), large-scale online real-time rendering (shorter response time), and large-scale joint simulation and deduction (higher accuracy). Large-scale urban digital twins have made significant progress in application scenarios such as transportation management, disaster prevention and control, and dual-carbon management. Building on this trend toward scale, the digital twin of the future city will continue to evolve toward being three-dimensional, unmanned, and global. It will disrupt the existing network: it is the 3D network of the future, displacing flat networks such as WeChat and Taobao.
The metaverse is a world parallel to the real one. With the help of modern technologies such as the Internet of Things, cloud computing, and big data, the data generated by humans and connected objects can be mirrored into the metaverse, allowing us to understand, manipulate, and simulate the real world in new ways. The metaverse should have six characteristics: permanence, real-time operation, unrestricted access, economic functionality, connectivity, and creativity. Its construction involves seven layers: experience, discovery, the creator economy, spatial computing, decentralization, human-computer interaction, and infrastructure, implemented through the "Community Intelligent Media Service Terminal Platform".
The metaverse is a new type of Internet application and social form that integrates a variety of new technologies: a virtual space parallel to and independent of the real world, an online virtual world that maps the real one, and an increasingly realistic digital virtual world.
The metaverse digital-city simulation system empowers urban emergency centers. It will focus on three major areas: production safety, disaster prevention and mitigation, and emergency rescue, simulating disasters in order to prevent them, guiding government decision-making, and operating automatically under artificial intelligence.

The Story of Wall Street Listing

A future network; a metaverse platform; a 1:1 copy of local cities; a 3D social metaverse. The interplay between reality and virtuality guides and simulates the real world. The urban digital twin takes the city as its object, building a 1:1 digital mapping between the digital world and the physical world, then running interdisciplinary mechanism models and simulations on that mapping while keeping it synchronized with the physical world in real time, in both directions. Over the past two years, key technologies for urban digital twins such as precise mapping, generative rendering, and simulation deduction have achieved breakthroughs from quantity to quality, reflected in large-scale dynamic perception mapping, large-scale online real-time rendering, and large-scale joint simulation deduction.
In precise mapping, unlike traditional surveying and mapping methods that consume manpower, time, and money, remote sensing, radar, vision, positioning, and other sensors are used together with existing survey data to perceive, in real time and at lower cost, multiple attributes such as the position and status of static urban components and dynamic objects (people, vehicles, and so on). In the future, by aggregating multi-dimensional sensor data from a city's air and ground and combining it with AI perception capabilities, multi-source heterogeneous data about the same entity can be fused and extracted, and relationships between entities can be constructed, yielding a large-scale, low-cost, unified, real-time, and accurate mapping of the city in the digital world.
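As a minimal sketch of the multi-source fusion idea above, consider two sensors reporting the same vehicle with different attributes, merged into one entity record. The field names and the matching rule (shared entity ID) are simplifying assumptions for illustration, not a real platform's data model.

```python
# Toy multi-source entity fusion: observations from different sensors that
# share an entity_id are merged into a single entity record. Later
# observations refine shared fields (e.g. position); unique fields accumulate.

def fuse(observations):
    """Merge per-sensor observations that share an entity_id."""
    entities = {}
    for obs in observations:
        ent = entities.setdefault(obs["entity_id"], {"sources": []})
        ent["sources"].append(obs["sensor"])
        for key, value in obs.items():
            if key not in ("sensor", "entity_id"):
                ent[key] = value
    return entities

obs = [
    {"entity_id": "car-42", "sensor": "radar",  "position": (12.0, 7.9), "speed_kmh": 41},
    {"entity_id": "car-42", "sensor": "camera", "position": (12.1, 8.0), "type": "bus"},
]
fused = fuse(obs)
print(fused["car-42"])
```

A real system would of course match entities probabilistically across space and time rather than by a shared ID, but the shape of the output (one fused record per entity, with provenance) is the point.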
In generative rendering, building on the precise-mapping data and combining AIGC (AI-Generated Content) with game-industry PGC (Professionally Generated Content) techniques, city-level 3D scene models can be generated automatically at multiple hierarchies, dimensions, and resolutions, with support for large-scale real-time rendering that is multi-user and interactive.
In simulation deduction, multidisciplinary, large-scale mechanism and simulation models are combined in the same digital world to form a "metaverse of simulation mechanisms", establishing a virtual-real interaction and bidirectional regulation loop. The key technologies are:
1) Cloud-native simulation: built on cloud-native supercomputing scheduling and solvers, it can significantly shorten simulation time and achieve real-time computing responses for city-level scenarios with over a million entities;
2) Unified-interface fusion computing: multiple mechanism models and simulation models can be fused and computed in real time, forming a joint multi-simulation service capability.
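The unified-interface idea in point 2 can be sketched as follows: each domain simulator (flooding, traffic, crowds, and so on) implements the same step contract, so a coordinator can advance them jointly on one shared clock and state. All class names, method signatures, and update rules here are illustrative assumptions, not a real simulation platform's API.

```python
# Hypothetical sketch of unified-interface fusion computing: domain models
# share one step() contract and one mutable city state, so effects can couple
# across domains (here, flooded roads raise traffic congestion).

from abc import ABC, abstractmethod


class DomainSimulator(ABC):
    """Common contract every domain model must implement."""

    @abstractmethod
    def step(self, t: float, shared_state: dict) -> dict:
        """Advance one tick; return updates to the shared city state."""


class FloodModel(DomainSimulator):
    def step(self, t, shared_state):
        # Toy rule: one more road floods every two ticks.
        return {"flooded_roads": int(t) // 2}


class TrafficModel(DomainSimulator):
    def step(self, t, shared_state):
        # Toy rule: congestion grows with time and with flooded roads.
        flooded = shared_state.get("flooded_roads", 0)
        return {"congestion": 0.1 * t + 0.5 * flooded}


def joint_simulation(models, ticks):
    """Advance all models on one clock, merging updates into shared state."""
    state = {}
    for t in range(ticks):
        for model in models:
            state.update(model.step(t, state))
    return state


final = joint_simulation([FloodModel(), TrafficModel()], ticks=6)
print(final)
```

The design point is the single contract: adding a waterlogging or crowd model means implementing one method, not integrating a new solver interface, which is what makes "joint multi-simulation service capability" tractable.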
Under the combined push of technology and demand, large-scale urban digital twins have made significant progress in application scenarios such as transportation management, disaster prevention and control, and dual-carbon management. In traffic management, high-precision 3D modeling and real-time rendering of urban road networks, water networks, rivers, vehicles, and other entities (cutting modeling costs by over 90% and shortening modeling time from months to days), combined with joint simulation models of road traffic flow, urban waterlogging, autonomous driving, and crowd movement, enable twin drills and effect evaluations of comprehensive plans for crowd evacuation guidance, traffic control strategies, weather response, and public transport supply at large-scale event sites (achieving "1-minute plan activation" and "5-minute arrival on scene" for emergencies, and "1-hour evacuation" for large-scale events).
The market for smart cities built on digital twins is very broad. IDC predicts that smart-city investment will exceed 100 billion US dollars by 2025, with a five-year compound annual growth rate above 30%. The biggest bottleneck currently facing urban digital twins is that city-level twins of large-scale physical entities and business processes have not yet been fully established. Urban digital twins will continue to evolve, on the basis of scale, toward being three-dimensional, unmanned, and global. In the future, they will serve both as a research, development, and testing environment for integrated unmanned systems in cities (unmanned vehicles, drones, robots, and so on) and as a support system for global perception and scheduling.
Generative AI achieved a breakthrough in 2022. Whether in image generation, code generation, or open-domain text generation, the quality, logic, and safety of generated content improved significantly. Applications built on generative technology will multiply in the coming years. However, safe, controllable, and ethically responsible generation still needs focused research and development, and special attention must be paid to the adverse social impact of false generated content.
—— Huang Fei, Language Technology Lab, Alibaba DAMO Academy
Generative AI uses machine learning algorithms to learn features from data, enabling machines to create entirely new digital content such as video, images, text, audio, or code. The content it creates resembles the training data rather than copying it. Its development has benefited from recent breakthroughs in basic research on large models, especially deep learning, from the accumulation of real-world data, and from falling computing costs. Over the past year, generative AI has focused the value of artificial intelligence on the word "creation", marking the beginning of AI's ability to define and present new things.

In the past year, the progress of generative AI has mainly been reflected in the following areas:
Progress in image generation has come from the application of diffusion models, represented by DALL·E 2 and Stable Diffusion. A diffusion model is a deep learning technique that generates images from noise. Behind it stand pre-trained models that understand human semantics more accurately, as well as CLIP, a unified text-image representation model. Its emergence has made image generation far more imaginative.
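The phrase "generates images from noise" can be made concrete with a one-dimensional toy: the forward process mixes a clean sample with Gaussian noise according to a schedule, and the reverse process inverts that step given a noise prediction. Here the "denoiser" is handed the true noise to keep the sketch short; a real diffusion model trains a neural network to predict it. All names and values are illustrative, not any real library's API.

```python
# Toy 1-D diffusion: forward noising q(x_t | x_0) and its inversion given a
# noise estimate. A trained model would supply eps_pred; here we pass the
# true eps, so the clean sample is recovered (up to float rounding).

import math
import random

random.seed(0)


def forward_noise(x0, alpha_bar):
    """Forward process: scale the clean sample and add Gaussian noise."""
    eps = random.gauss(0.0, 1.0)
    xt = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    return xt, eps


def denoise(xt, eps_pred, alpha_bar):
    """Invert the forward step given a noise prediction."""
    return (xt - math.sqrt(1.0 - alpha_bar) * eps_pred) / math.sqrt(alpha_bar)


x0 = 2.0           # a "clean" one-pixel image
alpha_bar = 0.6    # cumulative noise schedule value at some step t
xt, eps = forward_noise(x0, alpha_bar)
x0_hat = denoise(xt, eps, alpha_bar)
print(xt, x0_hat)
```

The entire difficulty of a real diffusion model lives in replacing the true `eps` with a learned prediction; everything else, including the text conditioning via CLIP-style embeddings mentioned above, steers that prediction.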

Progress in natural language processing (NLP) has come from ChatGPT, built on GPT-3.5 (Generative Pre-trained Transformer). This is a deep learning model for text generation trained on data available on the Internet, used for question answering, summarization, machine translation, classification, code generation, and conversational AI. Building on pre-trained large models that combine text and code, ChatGPT introduces manually annotated data and reinforcement learning from human feedback (RLHF) for continued training and optimization. With reinforcement learning, the large model can understand human instructions and their underlying intent, judge answer quality from human feedback, provide interpretable answers, respond appropriately to inappropriate questions, and form an iterative feedback loop.
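The reward-modeling step at the heart of RLHF can be sketched in a few lines: human annotators pick the better of two answers, and a scoring model is trained so the chosen answer scores higher, via the pairwise loss -log(sigmoid(r_chosen - r_rejected)). The one-weight "reward model" and toy features below are assumptions for illustration only, not ChatGPT's actual training setup.

```python
# Minimal RLHF reward-model sketch: gradient descent on the Bradley-Terry
# style preference loss until the model scores the human-preferred answer
# above the rejected one.

import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def pairwise_loss(r_chosen, r_rejected):
    """Preference loss: low when the chosen answer scores higher."""
    return -math.log(sigmoid(r_chosen - r_rejected))


def reward(w, feature):
    """Toy reward model: one weight scoring one scalar answer feature."""
    return w * feature


w, lr = 0.0, 0.5
chosen_feat, rejected_feat = 1.0, -1.0  # annotator preferred the first answer

for _ in range(20):
    diff = reward(w, chosen_feat) - reward(w, rejected_feat)
    # d/dw of -log(sigmoid(diff)) = -(1 - sigmoid(diff)) * d(diff)/dw
    grad = -(1.0 - sigmoid(diff)) * (chosen_feat - rejected_feat)
    w -= lr * grad

print(reward(w, chosen_feat) > reward(w, rejected_feat))
```

In the full pipeline this trained reward model then scores the language model's outputs during reinforcement learning, closing the iterative feedback loop the paragraph describes.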

Generative AI has entered a period of explosive application, which will greatly accelerate digital content production and creation.

Progress in code generation has come from the systems AlphaCode and Copilot. In February 2022, DeepMind released its latest research result, AlphaCode, a system that can program autonomously and that outperformed roughly 47% of human engineers in programming competitions hosted by Codeforces, marking the first time an AI code-generation system has reached a competitive level in such contests. Copilot, trained on open-source code, has been commercialized as a subscription service for developers, who can use it to auto-complete code. As a system built on a large language model, Copilot still requires manual correction in most cases, but it helps developers work more efficiently on simple, repetitive code and will have a significant impact on the IDE (Integrated Development Environment) industry.
With the explosive growth of content creation, the main challenge facing generative AI will be making the quality and semantics of generated content controllable. For industrialization, cost reduction remains key: only when the training and inference costs of large models like ChatGPT are low enough can they be promoted at scale. In addition, data security and controllability, copyright, and trust issues will need to be addressed one by one as industrialization accelerates.
Over the next three years, generative AI will enter the fast lane of productization, with more exploration of business models, and the industrial ecosystem will gradually mature as applications spread. By then, the content-creation capability of generative AI will reach human level. Large technology companies with data, computing power, and productization experience will be the main players in deploying generative AI. Computing infrastructure and platforms built around generative models will develop, and models will become readily available services that customers can use without specialist skills in deploying and running them. Generative models will make significant progress in interactive capability, safety and trustworthiness, and cognitive intelligence, assisting humans in all kinds of creative tasks.

Investment terms and procedures

1. Establish a strategic partnership between Citigroup, ZTE, and Alibaba.

2. The local government invests 300 million yuan; Citibank invests nine times that amount in US dollars (equivalent to RMB 2.7 billion); a joint venture is established with the local government to build an artificial-intelligence metaverse and industrial park. The US listing process is initiated at the same time, and an A-share shell may also be acquired; Citigroup Financial Holding arranges approximately $1 billion in additional underwriting.

3. SHOM is a registered US-listed company trading OTC (we control dozens of US-listed companies, of which SHOM is one). Because US SPAC rules require the Chinese business to account for less than 25%, listing can proceed via either the OTC or the IPO route; but tight control of an OTC listing can create a market value in the billions of dollars, with greater controllability and financing capacity than an IPO. The core lies in maintaining market value and running serial financings, then merging with a SPAC through the OTC route to reach main-board trading and large-scale financing. The target company and Citibank jointly register or acquire AIGC high-tech international IP in the United States; Citigroup Financial Holding arranges the IP of American academicians or global Nobel laureates, one of whom serves as Chief Scientist, and a global AIGC high-tech story is born. Citigroup Financial Holding is the Chief Strategic Architect of the worldwide Nobel Future Research Fellowship, whose institute counts hundreds of Nobel laureates. Citigroup Financial Holding jointly acquires the listed company SHOM, taking a controlling stake and the chairmanship of the board of directors.

4. First, issue shares for and acquire the US IP company to activate the listed company. After the Form 10 merger and a 1:100 reverse share split, roughly 14 million shares remain in the market at approximately $3 per share.

5. Issue 5 billion new shares to the Chinese industrial-park corporation, merging its business flows and injecting its assets.

6. After the issuance, we hold over 90% of the shares, with a market value of 15 billion US dollars. Citigroup Financial Holding arranges a high-frequency hedge fund to underwrite and subscribe to an additional $1 billion equity issue by the listed company, creating high trading volume and liquidity.
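The figures in items 4 to 6 can be cross-checked with simple arithmetic. The inputs below are this document's own stated claims (~14 million post-split shares at ~$3, then 5 billion new shares issued), not market data, and the calculation assumes no other dilution.

```python
# Arithmetic check of items 4-6: ownership stake and market value implied by
# the stated share counts and price. All inputs are claims from the plan.

post_split_shares = 14_000_000        # item 4: ~14M shares after the 1:100 split
price_per_share = 3.0                 # item 4: ~$3 per share
new_shares_issued = 5_000_000_000     # item 5: 5 billion new shares

total_shares = post_split_shares + new_shares_issued
new_holder_stake = new_shares_issued / total_shares
market_value = total_shares * price_per_share

print(f"stake: {new_holder_stake:.1%}")
print(f"market value: ${market_value / 1e9:.2f}B")
```

At the stated price the numbers are internally consistent: the new holder ends up well above the 90% claimed in item 6, and total shares times $3 lands near the $15 billion figure.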

7. Globally acquire game and AI high-tech companies with cash flow to generate data and fundamentals; then merge with a SPAC and move directly to NASDAQ. A SPAC of $500 million to $2 billion is available, after which a PIPE of $100 million to $1 billion is arranged for the company. Maintain a market value in the billions of dollars. Alternatively, uplist directly to NASDAQ.