I texted Dan earlier and he said the site is down and they’re working on it. That’s all I know. All I get is a blank totally white screen
Hmmmm. Looks like chat site is down after u posted this. Can u get in?
NEWS RELEASE 13-NOV-2023
Twisted magnets make brain-inspired computing more adaptable
A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL and Imperial College London researchers.
Peer-Reviewed Publication
UNIVERSITY COLLEGE LONDON
Neuromorphic computing
IMAGE: An artistic representation of connected magnetic skyrmions as a computational medium for brain-inspired reservoir computing.
CREDIT: Dr Oscar Lee
In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.
Such an approach, known as physical reservoir computing, has until now been limited due to its lack of reconfigurability. This is because a material’s physical properties may allow it to excel at a certain subset of computing tasks but not others.
Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.
“The next step is to identify materials and device architectures that are commercially viable and scalable.”
Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.
Physical reservoir computing is one of several neuromorphic (or brain-inspired) approaches that aim to remove the need for distinct memory and processing units, facilitating more efficient ways to process data. In addition to being a more sustainable alternative to conventional computing, physical reservoir computing could be integrated into existing circuitry to provide additional capabilities that are also energy efficient.
In the study, involving researchers in Japan and Germany, the team used a vector network analyser to determine the energy absorption of chiral magnets at different magnetic field strengths and temperatures ranging from -269 °C to room temperature.
They found that different magnetic phases of chiral magnets excelled at different types of computing task. The skyrmion phase, where magnetised particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification – for instance, identifying if an animal is a cat or dog.
Co-author Dr Jack Gartside, of Imperial College London, said: “Our collaborators at UCL in the group of Professor Hidekazu Kurebayashi recently identified a promising set of materials for powering unconventional computing. These materials are special as they can support an especially rich and varied range of magnetic textures. Working with the lead author Dr Oscar Lee, the Imperial College London group [led by Dr Gartside, Kilian Stenning and Professor Will Branford] designed a neuromorphic computing architecture to leverage the complex material properties to match the demands of a diverse set of challenging tasks. This gave great results, and showed how reconfiguring physical phases can directly tailor neuromorphic computing performance."
The work also involved researchers at the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).
Let’s all hope Len and his men have something like that cooking . Joe Noonan retired from IBM. Some on chat want him to help out Vcsy. I heard Luiz is commenting on his LinkedIn page. He’s said just Now Solutions could be worth hundreds of millions. $$
Seems to me everyone is infringing and going around us or else we have sold a lot of new licenses? I hope a lot of letters don’t have to be sent out. The ole patent troll moniker sucks. We need our own products. A J V is the only way imo
Ya think Ploinks is involved? Short range comms. Secure P to P. Sounds like it. Us and SBV are friends with Broadcom
Hau’oli Makahiki Hou! (Happy New Year!)
Maybe when Len steps aside Dr Cambou will be our new CEO?
From SBV WEBSITE
Bertrand Cambou
Technical Advisor / Principal Investigator
Professor Cambou’s primary research interests within Northern Arizona University are in cyber-security through the application of microelectronics to strengthen hardware security. This includes the design of novel secure elements, Physically Unclonable Functions (PUF), True Random Generators (TRNG), and the usage of nanotechnologies such as ReRAM. He worked in the pioneering smartcard/secure microcontroller industry at Gemplus (now Gemalto), and in the POS/secure payment industry at Ingenico. He served 15 years at Motorola Semiconductor (now NXP-Freescale) in multiple capacities including CTO. Dr. Cambou was named “Distinguished Innovator” and scientific advisor of the BOD. In recent years he worked as CEO in Silicon Valley in the high-tech industry where his organization won a contract with IARPA with applications related to quantum cryptography. He is the author and co-author of 42 patents in microelectronics and cybersecurity.
PhD, Electronics, Paris-South (XI) University
Professional Engineering Degree, Electronics, Supelec Paris
Maitrise degree, Physics, Toulouse III University
It’s pretty obvious folks VQSY. Let’s see that name change soon.
Parallel or antiparallel . The difference creates motion? Energy? Magnets? Opposite poles. Slot cars!
I rest my case
The measurement of the syndrome has the projective effect of a quantum measurement, so even if the error caused by the noise is arbitrary, it can be expressed as a combination of basis operations called the error basis (which is given by the Pauli matrices and the identity).
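That passage can be illustrated numerically. Any single-qubit error operator E decomposes over the Pauli error basis {I, X, Y, Z} with coefficients c_P = tr(P·E)/2. The sketch below is a generic illustration using plain Python lists, not tied to any particular error-correcting code or product; the example matrix E is invented.

```python
# Decompose an arbitrary 2x2 "error" operator into the Pauli error basis.
# Any single-qubit error E can be written E = cI*I + cX*X + cY*Y + cZ*Z,
# with each coefficient given by cP = trace(P @ E) / 2.

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(m):
    return m[0][0] + m[1][1]

def pauli_coeffs(err):
    """Coefficients (cI, cX, cY, cZ) of err in the Pauli error basis."""
    return tuple(trace(matmul(p, err)) / 2 for p in (I, X, Y, Z))

# A made-up error operator for illustration: mostly identity, a little X.
E = [[0.8, 0.1], [0.1, 0.8]]
cI, cX, cY, cZ = pauli_coeffs(E)
# E decomposes as 0.8*I + 0.1*X, with no Y or Z component.
```

Even an "arbitrary" continuous error thus projects, upon syndrome measurement, onto a discrete set of correctable Pauli errors.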
Hmmm the error basis. Even if arbitrary Doc!
"Polarization bases compensation towards advantages in satellite-based QKD without active feedback"
SBV
In summary, the DCT coefficients can be stored in an XML file as a structured representation of the frequency components of a signal. The XML file can be parsed by software programs that support XML, and can be used to store, transport, and analyze data in a variety of applications, such as audio and image processing, data compression, and machine learning.
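As a concrete sketch of what that summary describes: compute DCT-II coefficients of a small signal, serialize them to XML, and parse them back. The element and attribute names here are invented for illustration; no specific product's schema is implied.

```python
# Minimal sketch: DCT coefficients stored in (and parsed back from) XML.
import math
import xml.etree.ElementTree as ET

def dct2(signal):
    """Type-II discrete cosine transform (unnormalized)."""
    n = len(signal)
    return [sum(signal[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n))
            for k in range(n)]

def coeffs_to_xml(coeffs):
    """Serialize coefficients as <dct><coeff k="0">...</coeff>...</dct>."""
    root = ET.Element("dct", n=str(len(coeffs)))
    for k, c in enumerate(coeffs):
        ET.SubElement(root, "coeff", k=str(k)).text = repr(c)
    return ET.tostring(root, encoding="unicode")

def xml_to_coeffs(xml_text):
    """Parse the XML back into a list of coefficients."""
    root = ET.fromstring(xml_text)
    return [float(e.text) for e in root.findall("coeff")]

signal = [1.0, 2.0, 3.0, 4.0]
xml_doc = coeffs_to_xml(dct2(signal))
restored = xml_to_coeffs(xml_doc)
# The k=0 coefficient is the plain sum of the samples (10.0 here), and the
# round trip through XML preserves every coefficient.
```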
lol. STORE,TRANSPORT AND ANALYZE. Doc
Could that be throwing shade for EXTRACT, TRANSFORM and LOAD? Ya think?
A blast from the past…..
TUESDAY, OCTOBER 21, 2008
NOW SOLUTIONS WINS BID TO PROVIDE EMPATH® HR & PAYROLL SOFTWARE-AS-A-SERVICE SOLUTION TO DAISHOWA MARUBENI INTERNATIONAL LTD., PEACE RIVER PULP DIVISION, IN CANADA
Fort Worth, TX, October 21, 2008 (PRNewswire)– Now Solutions, Inc. (Now Solutions) is pleased to announce that it has won the bid to provide its proprietary emPath® human resources and payroll management Software-as-a-Service (SaaS) solution to Daishowa-Marubeni International Ltd., Peace River Pulp Division, a pulp and paper manufacturer based in Canada. emPath® is an integrated Human Resources Management System (HRMS) and payroll solution that provides a low total cost of ownership and a high return on investment while enabling users to improve management of personnel and decision-making capabilities.
Commenting on this contract award, Marianne Franklin, President and CEO of Now Solutions stated, “We faced different competition than in our traditional license model. Consequently, this is a terrific win for Now Solutions, as Daishowa Marubeni represents our first client for emPath® in the Software-as-a-Service model in Canada. Now Solutions’ emPath® Software-as-a-Service offers clients the convenience of a maintenance-free human resources and payroll management solution that you would normally expect with an outsourced provider but with the added flexibility, integration and control more typical with in-house software. With this offering, we have significantly expanded our emPath® marketing opportunities.”
Now Solutions is one of the few companies certified by the Canadian Privacy Institute under the Personal Information Protection and Electronic Documents Act (PIPEDA). To be certified, both the software and the company must comply with PIPEDA.
Now Solutions’ contract award can be attributed to the flexibility and rich functionality of emPath® in the robust, Web-based SaaS solution. Because of the inherent flexibility of emPath®, Now Solutions can deliver an extremely sophisticated payroll and human resource management solution on a SaaS platform in a more cost-effective manner than other HRMS providers. The SaaS version of emPath® provides an even more cost-effective solution that offers the same powerful HRIS and Payroll features of emPath® coupled with complete data infrastructure, IT support, and a fully-serviced and integrated hosted solution.
…I wonder how many other emPath accounts have been set up since Len took over as CEO? I hope a lot.
What is a distributed database?
A distributed database is a database that runs and stores data across multiple computers, as opposed to doing everything on a single machine.
Typically, distributed databases operate on two or more interconnected servers on a computer network. Each location where a version of the database is running is often called an instance or a node.
A distributed database, for example, might have instances running in New York, Ohio, and California. Or it might have instances running on three separate machines in New York. A traditional single-instance database, in contrast, only runs in a single location on a single machine.
What is a distributed database used for?
There are different types of distributed databases and different distributed database configuration options, but in general distributed databases offer several advantages over traditional, single-instance databases:
Distributing the database increases resilience and reduces risk. If a single-instance database goes offline (due to a power outage, machine failure, scheduled maintenance, or anything else) all of the application services that rely on it will go offline, too. Distributed databases, in contrast, are typically configured with replicas of the same data across multiple instances, so if one instance goes offline, other instances can pick up the slack, allowing the application to continue operating.
Different distributed database types and configurations handle outages differently, but in general almost any distributed database should be able to handle outages better than a single-instance database.
For this reason, distributed databases are an increasingly popular choice, particularly for mission-critical workloads and any data that needs to remain available at all times.
Distributed databases are generally easier to scale. As an application grows to serve more users, the storage and computing requirements for the database will increase over time — and not always at a predictable rate.
Trying to keep up with this growth when using a single-instance database is difficult – you either have to pay for more than you need so that your database has “room to grow” in terms of storage and computing power, or you have to navigate regular hardware upgrades and migrations to ensure the database instance is always running on a machine that’s capable of handling the current load.
Distributed databases, in contrast, can scale horizontally simply by adding an additional instance or node. In some cases, this process is manual (although it can be scripted), and in the case of serverless databases it is entirely automated. In almost all cases, the process of scaling a distributed database up and down is more straightforward than trying to do the same with a single-instance database.
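The "just add a node" idea can be sketched with a toy placement scheme. This is an invented hash-based distribution for illustration only, not how any particular database actually shards data (real systems use ranges or consistent hashing to limit data movement).

```python
# Toy sketch of horizontal scaling: keys spread across nodes by hash.
# Adding a node redistributes the same keys over more machines.

def place(key, num_nodes):
    """Pick a node index for a key by hashing it."""
    return hash(key) % num_nodes

def distribute(keys, num_nodes):
    """Map each node index to the list of keys it stores."""
    placement = {n: [] for n in range(num_nodes)}
    for k in keys:
        placement[place(k, num_nodes)].append(k)
    return placement

keys = [f"row-{i}" for i in range(1000)]
three_nodes = distribute(keys, 3)
four_nodes = distribute(keys, 4)   # scale out: same data, one more node
# Every key is still stored somewhere; each node now holds a smaller share.
```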
Distributing the database can improve performance. Depending on how it is configured, a distributed database may be able to operate more efficiently than a single-instance database because it can spread the computing workload between multiple instances rather than being bottlenecked by having to perform all reads and writes on the same machine.
Geographically distributing the database can reduce latency. Although not all distributed databases support multi-region deployments, those that do can also improve application performance for users by reducing latency. When data can be located on a database instance that is geographically close to the user who is requesting it, that user will likely have a lower-latency application experience than a user whose application needs to pull data from a database instance that’s (for example) on the other side of the globe.
Depending on the specific type, configuration, and deployment choices an organization makes, there may be additional benefits to using a distributed database. Let’s look at some of the options that are available when it comes to distributed databases.
Types of distributed databases: NoSQL vs. distributed SQL databases
Broadly, there are two types of distributed databases: NoSQL and distributed SQL. (Document-based and key-value are two other terms often used to describe NoSQL databases, so you may sometimes see these options compared as “document based vs. relational,” for example).
To understand the difference between them, it’s helpful to take a quick dive into the history of databases.
Humans have been storing data in various formats for millennia, of course, but the modern era of computerized databases really began with Edgar F. Codd and the invention of the relational (SQL) database. Relational databases store data in tables and enforce rules – called schema – about what types of data can be stored where, and how the data relate to each other.
Relational databases and SQL, the programming language used to configure and query them, caught on in the 1970s and quickly became the default database type for virtually all computerized data storage. Transactional applications, in particular, quickly came to rely on relational databases for their ability to support ACID transactional guarantees – in essence, to ensure that transactions are processed correctly, can’t interfere with each other, and remain true once they’re committed even if the database subsequently goes offline.
After the explosion of the internet, though, it became clear that there were limitations to the traditional relational database. In particular, it wasn’t easy to scale, it wasn’t built to function well in cloud environments, and distributing it across multiple instances required complex, manual work called sharding.
In part as a response to this, a new class of databases called NoSQL databases arose. These databases were built to be cloud-native, resilient, and horizontally scalable. But to accomplish those goals, they sacrificed the strict schema enforcement and ACID guarantees offered by traditional relational databases, storing data in a less structured format. At scale, NoSQL databases have appealing advantages over traditional relational databases, but particularly for transactional workloads, they also require making compromises when it comes to data consistency and correctness.
In recent years, a new class of relational database – “new SQL”, a.k.a. the distributed SQL database – has emerged, aiming to offer a best-of-both-worlds option. Distributed SQL provides the cloud-native scaling and resilience of NoSQL databases and the strict schema, consistency, and ACID guarantees of traditional relational databases.
Unlike traditional relational databases, distributed SQL databases don’t require manual work to distribute and scale. But they can still offer ACID guarantees, making them a highly appealing prospect for any organization with important transactional workloads.
Today, both NoSQL and distributed SQL databases are widely used, and many organizations use both types. Broadly speaking, NoSQL databases are common choices for analytics and big data workloads, while distributed SQL databases are used for transactional workloads and other applications such as system-of-record stores where data consistency can’t be sacrificed for availability and scale. For this reason, a distributed SQL database may sometimes be called a distributed transactional database.
Distributed database configurations: active-passive vs. active-active vs. multi-active
One of the main goals of a distributed database is high availability: making sure the database and all of the data it contains are available at all times. But when a database is distributed, its data is replicated across multiple physical instances, and there are several different ways to approach configuring those replicas.
Active-passive
The first, and simplest, is an active-passive configuration. In an active-passive configuration, all traffic is routed to a single “active” replica, and then copied to the other replicas for backup.
In a three-node deployment, for example, all data might be written to an active replica on node 1 and then subsequently copied to passive replicas on nodes 2 and 3.
This approach is straightforward, but it does introduce potential problems. In addition to the performance bottleneck that routing all reads and writes to a specific replica can present, problems can also arise depending on how new data is written to the passive “follower” replicas:
If the data is replicated synchronously (immediately) and writing to one of the “follower” replicas fails, then you must either sacrifice availability (the database will become unavailable unless all three replicas are online) or consistency (the database may have replicas with conflicting data, as an update can be written to the active replica but fail to write to one of the passive follower replicas).
If the data is replicated asynchronously, there’s no way to guarantee that data makes it to the passive follower replicas (one could be online when the data is written to the active replica but go offline when the data is subsequently replicated to the passive followers). This introduces the possibility of inconsistencies and even potentially data loss.
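The asynchronous failure mode can be made concrete with a toy sketch. The classes and routing logic below are invented for illustration; no real database exposes replication this way.

```python
# Toy active-passive replication: a follower that is offline while the
# active replica copies data out simply never receives the write.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.online = True

def replicate_async(active, followers, key, value):
    """Write to the active replica, then copy to whichever followers
    happen to be online; offline followers silently miss the update."""
    active.data[key] = value
    for f in followers:
        if f.online:
            f.data[key] = value

active = Replica("node1")
followers = [Replica("node2"), Replica("node3")]

followers[1].online = False          # node3 is down during the write
replicate_async(active, followers, "xyz", 123)
followers[1].online = True           # node3 returns... without the data

# node2 has the write, node3 does not: the replicas are now inconsistent,
# and if node1 is lost before node3 catches up, the data may be lost too.
```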
In summary, active-passive systems offer one of the most straightforward configuration options – particularly if you’re trying to manually adapt a traditional relational database for a distributed deployment. But they also introduce risks and trade-offs that can impact database availability and consistency.
Active-active
In active-active configurations, there are multiple active replicas, and traffic is routed to all of them. This reduces the potential impact of a replica being offline, since other replicas will handle the traffic automatically.
However, active-active setups are much more difficult to configure for most workloads, and it is still possible for consistency issues to arise if an outage happens at the wrong time.
For example, imagine an active-active system with replicas A and B:
1. A receives a write for key xyz with the value 123, and then immediately fails and goes offline.
2. A subsequent read for xyz is thus routed to B, and returns NULL, because xyz = 123 hadn’t yet been copied to B when A went offline.
3. The application, seeing that there isn’t a current value for xyz, sends an xyz = 456 write to B.
4. A comes back online.
At the end of this sequence, we have an inconsistency: A says xyz = 123 and B says xyz = 456. While such a scenario is not common, inconsistencies like this one have the potential to cause a lot of trouble when they do happen, so active-active setups must be configured and tested very carefully to attempt to mitigate this risk.
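That sequence can be replayed as a minimal sketch, with plain dicts standing in for replicas (no real database API is implied):

```python
# Replaying the active-active inconsistency scenario step by step.
A, B = {}, {}
a_online = True

A["xyz"] = 123          # 1. A accepts the write for xyz = 123...
a_online = False        #    ...then fails before replicating it to B

value = A.get("xyz") if a_online else B.get("xyz")   # 2. read routed to B
if value is None:       # 3. the app sees no value and writes a new one to B
    B["xyz"] = 456

a_online = True         # 4. A comes back online: A says 123, B says 456
```

The divergence is now permanent unless some conflict-resolution mechanism picks a winner, which is exactly the risk the article describes.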
Multi-active
Multi-active availability is the approach used by CockroachDB, which aims to offer a better alternative to active-passive and active-active configurations.
Like active-active configurations, all replicas can handle both reads and writes in a multi-active system. But unlike active-active, multi-active systems eliminate the possibility of inconsistencies by using a consensus replication system, where writes are only committed when a majority of replicas confirm they’ve received the write.
A majority of replicas thus define what is correct, allowing the database to remain both online and consistent even if some replicas are offline at the time of writing. If a majority of replicas are offline, the entire database becomes unavailable to prevent the introduction of inconsistent data.
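The majority rule can be sketched as a toy quorum write. This is an invented simplification for illustration, not CockroachDB's actual consensus protocol (which uses Raft, with log replication and leader election):

```python
# Toy quorum write: a write commits only if a majority of replicas ack it.

def quorum_write(replicas, key, value):
    """Return True (committed) iff a majority of replicas are reachable."""
    reachable = [r for r in replicas if r["online"]]
    if len(reachable) <= len(replicas) // 2:
        return False                 # no majority: refuse the write
    for r in reachable:
        r["data"][key] = value
    return True

replicas = [{"online": True, "data": {}} for _ in range(3)]

replicas[0]["online"] = False        # one replica down: 2 of 3 is a majority
ok_one_down = quorum_write(replicas, "xyz", 123)   # commits

replicas[1]["online"] = False        # two down: 1 of 3 is not a majority
ok_two_down = quorum_write(replicas, "xyz", 456)   # rejected
# The database stays consistent: the minority can never commit divergent data.
```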
Distributed databases vs. cloud databases
Since we’re discussing configuration options for distributed databases, it’s worth pointing out that although the terms distributed database and cloud database are sometimes used interchangeably, they’re not necessarily the same thing.
A distributed database is any database that’s distributed across multiple instances. Often, these instances are deployed to a public cloud provider such as AWS, GCP, or Azure, but they don’t have to be. Distributed databases can also be deployed on-premises, and some even support hybrid cloud and multi-cloud deployments.
A cloud database is any database that’s been deployed in the cloud (generally a public cloud such as AWS, GCP, or Azure), whether it’s a traditional single-instance deployment or a distributed deployment.
In other words, a distributed database might be run in the cloud, but it doesn’t have to be. Similarly, a cloud database might be distributed, but it doesn’t have to be.
Pros and cons of distributed databases
We’ve already discussed the pros of distributed databases earlier in this article, but to quickly review, the reasons to use a distributed database are generally:
High availability (data is replicated so that the database remains online even if a machine goes down)
High scalability (they can be easily scaled horizontally by adding instances/nodes)
Improved performance (depending on type, configuration, and workload)
Reduced latency (for distributed databases that support multi-region deployments)
Beyond those, specific distributed databases may offer additional appealing features. CockroachDB, for example, allows applications to treat the database as though it were a single-instance deployment, making it simpler to work with from a developer perspective. It also offers CDC changefeeds to facilitate its use within event-driven applications.
The cons of distributed databases also vary based on the specifics of the database’s type, configuration, and the workloads it’ll be handling. In general, though, potential downsides to a distributed database may include:
Increased operational complexity. Deploying, configuring, managing, and optimizing a distributed database can be more complex than working with a single-instance DB. However, many distributed databases offer managed DBaaS deployment options that can deal with the operational work for you.
Increased learning curve. Distributed databases work differently, and it typically takes teams some time to adapt to a new set of best practices. In the case of NoSQL databases, there may also be a learning curve for developers, as some popular NoSQL databases use proprietary query languages. (Distributed SQL databases, on the other hand, use a language most developers already know: SQL.)
Beyond these, though, there are a variety of additional factors that must be assessed on a case-by-case basis.
Cost, for example, is a significant factor for most organizations, but it’s not possible to say that a distributed database is cheaper or more expensive – it depends on the database you pick, how you choose to deploy it, the workload requirements, how it’s configured, etc.
In principle, a distributed database might sound more expensive, as it runs on multiple instances rather than a single one. In practice, though, they can often be cheaper – especially when you factor in the cost of your database becoming unavailable. For large companies dealing with thousands of transactions per minute, even a few minutes of downtime can result in losses in the millions of dollars.
Similarly, managed DBaaS deployment options can look more expensive than self-hosted options at first, but they also significantly reduce the operational workload that has to be carried by your own team, which can make them the cheaper option.
For this reason, companies typically spend significant amounts of time and money testing and evaluating their database options, to determine which one best fits their specific budget and requirements.
How a distributed database works
Distributed databases are quite complicated, and entire books could be written about how they work. That level of detail is outside the scope of this article, but we will take a look at how one distributed SQL database, CockroachDB, works at a high level.
From the perspective of your application, CockroachDB works very similarly to a single Postgres instance – you connect and send data to it in precisely the same way. But when the data reaches the database, CockroachDB automatically replicates and distributes it across three or more nodes (individual instances of CockroachDB).
To understand how this occurs, let’s focus on what happens to a single range – a chunk of data – when it’s written to the database. For the purposes of simplicity, we’ll use the example of a three-node, single-region cluster, although CockroachDB can support multi-region deployments and many, many more nodes.
In our example, when the data in a range is sent to the database, it is written into three replicas – three copies of the data, one on each node. One of the three nodes is automatically designated the leaseholder for this range, meaning that it coordinates read and write requests relating to the data in that range. But any node can receive requests, distinguishing CockroachDB from active-passive systems in which requests must pass through the central “active” node.
Consistency between the replicas on each node is maintained using the Raft consensus algorithm, which ensures that a majority of replicas agree on the correctness of data being entered before a write is committed. This is how CockroachDB achieves its multi-active designation – like an active-active system, all nodes can receive read and write requests, but unlike an active-active system, there is no risk of consistency problems arising.
Of course, in practice it’s all a bit more complex than that makes it sound! For a full accounting of how CockroachDB works, this architecture guide is a good starting point.
The ‘521 Patent essentially protects our XML Agent/XML Broker/XML Portal technologies under our Emily™ product line. This patent allows multiple, distributed databases to be combined and viewed in a single location as if it was a single database. These products can be used as an alternative to using Web Services.
What’s just that one worth?
I’m sure Len and Luiz are on it. In fact I told Luiz to check on Netflix a little earlier , over on chat 😎🤙
Well that pretty much sums it up and covers ALL the bases don’t ya think Doc. Is it any wonder Msft HP Samsung and LG etc etc fought so hard? And to think we came out smelling like a rose after the Markman hearing and even added several new claims.
ar·bi·trar·y
/ˈärbəˌtrerē/
adjective
based on random choice or personal whim, rather than any reason or system.
"his mealtimes were entirely arbitrary"
Similar:
capricious
whimsical
random
chance
erratic
unpredictable
inconsistent
wild
hit-or-miss
haphazard
casual
unmotivated
motiveless
unreasoned
unreasonable
unsupported
irrational
illogical
groundless
unjustifiable
unjustified
wanton
discretionary
personal
subjective
discretional
Opposite:
rational
reasoned
(of power or a ruling body) unrestrained and autocratic in the use of authority.
"arbitrary rule by King and bishops has been made impossible"
Similar:
despotic
tyrannical
tyrannous
peremptory
summary
autocratic
dictatorial
authoritarian
draconian
autarchic
antidemocratic
oppressive
repressive
undemocratic
illiberal
imperious
domineering
high-handed
absolute
uncontrolled
unlimited
unrestrained
Opposite:
democratic
accountable
MATHEMATICS
(of a constant or other quantity) of unspecified value.
From an OpenText developer forum:
acezone
July 6, 2012 #3
Is it possible to maintain some DCT templates in separate XML files? I want to ensure that we can change those templates without touching templating.cfg.
Follow the process below:
1. Create another XML file for maintaining "categories/data-type".
2. Upload / edit newly created XML file under "
3. Edit and make the following entry in templating.cfg
a.
b. Add "&give_any_name;" before closing of tag
Remember: any time you change the XML file, you must also touch/edit templating.cfg so that the necessary changes take place on the server.
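A minimal sketch of the entity-reference mechanism the steps above describe, assuming templating.cfg is itself XML; the file name categories.xml and the entity name give_any_name are placeholders, not fixed names:

```xml
<!-- templating.cfg: declare the external file as an entity, then reference it -->
<!DOCTYPE templating [
  <!ENTITY give_any_name SYSTEM "categories.xml">
]>
<templating>
  <!-- existing template configuration ... -->
  &give_any_name; <!-- inlines the categories/data-type definitions -->
</templating>
```

The `&give_any_name;` reference must appear before the closing tag (step 3b); when templating.cfg is parsed, the external file is pulled in at that point.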
Regards,
Ace
Doc. Lol
In summary, DCT coefficients can be stored in an XML file as a structured representation of the frequency components of a signal. The XML file can be parsed by any software that supports XML, and used to store, transport, and analyze data in a variety of applications, such as audio and image processing, data compression, and machine learning.
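A toy sketch of that round trip in Python, using only the standard library; the element names `dct` and `coeff` are illustrative, not any standard schema:

```python
import math
import xml.etree.ElementTree as ET

def dct2(signal):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    n_samples = len(signal)
    return [
        sum(x * math.cos(math.pi / n_samples * (n + 0.5) * k)
            for n, x in enumerate(signal))
        for k in range(n_samples)
    ]

def coeffs_to_xml(coeffs):
    # Store each frequency component as a <coeff> element with its index.
    root = ET.Element("dct")
    for k, c in enumerate(coeffs):
        ET.SubElement(root, "coeff", index=str(k)).text = repr(c)
    return ET.tostring(root, encoding="unicode")

def xml_to_coeffs(xml_text):
    # Parse the XML back into an ordered list of coefficients.
    root = ET.fromstring(xml_text)
    return [float(e.text) for e in sorted(root, key=lambda e: int(e.get("index")))]

signal = [1.0, 2.0, 3.0, 4.0]
coeffs = dct2(signal)            # frequency-domain representation
doc = coeffs_to_xml(coeffs)      # "transport" form
roundtrip = xml_to_coeffs(doc)   # parsed back for analysis
```

The k = 0 coefficient is just the sum of the samples, so it is easy to sanity-check the transform by hand.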
Key word imo “ parsed “.
Everyone and his brother will be using this, and our patents will speed up this process as well as enhance it. “ store , transport , analyze “. Sound familiar ? Lol
Not sure. Wade could have made it up? I wouldn’t put anything past him. Not much longer til we find out everything. AI and ML catching up fast now. Quantum taking off like wild fire! New discoveries every day. Hard to keep up. It will be interesting to see in which direction Len and our board take us. I’m thinking the licensing route on our patents and a jv on plonks . Go VQSY !!!
Our peer to peer social network patent is way more valuable than most of you probably think. I hope to see a bidding war soon.
Doc. See SBV and Lilly Pond Dr ? /consultant for them at Az St
Damn Doc. You got me dusting off my old Kahlil Gibran books. Lol
Yesterday is but today's memory, and tomorrow is today's dream
So were you getting at a correlation between the Breit-Wheeler process and quantum computing? Entanglement and superpositions? Pairs of photons: pure light energy is transferred into matter. And then data? That would be awesome!
That wouldn’t surprise me. Lol. Nothing would surprise me anymore. I have a funny feeling there will be a lot more twists and turns after February. But I hope not. We’ve all been through enough.
Doc. A poster on chat site said our lawyer Pete told them nothing is in Wade’s trust other than his shares. jfyi. But I think that is good news. At least now we know .
Theory of Supercurrent in Superconductors
Hiroyasu Koizumi, Alto Ishikawa
Download PDF
In the standard theory of superconductivity, the origin of superconductivity is electron pairing. The current induced by a magnetic field is calculated by the linear response to the vector potential, and the supercurrent is identified as the dissipationless flow of the paired electrons, while single electrons flow with dissipation. This supercurrent description suffers from the following serious problems: 1) it contradicts the reversible superconducting-normal phase transition in a magnetic field observed in type I superconductors; 2) the gauge invariance of the supercurrent induced by a magnetic field requires the breakdown of the global U(1) gauge invariance, or the non-conservation of the particle number; 3) the explanation of the ac Josephson effect is based on a boundary condition that differs from the real experimental one.
We will show that the above problems are resolved if the supercurrent is attributed to the collective mode arising from the Berry connection for many-body wave functions. Problem 1) is resolved by attributing the appearance and disappearance of the supercurrent to the abrupt appearance and disappearance of topologically-protected loop currents produced by the Berry connection; problem 2) by assigning the non-conserved number to the number of particles participating in the collective mode produced by the Berry connection; and problem 3) by identifying the relevant phase in the Josephson effect as that arising from the Berry connection, and using a modified Bogoliubov transformation that conserves particle number.
Fourier transform
In physics, engineering and mathematics, the Fourier transform is an integral transform that converts a function into a form that describes the frequencies present in the original function. The output of the transform is a complex-valued function of frequency. Wikipedia
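As a toy illustration of that definition, a naive discrete Fourier transform in Python (standard library only) picks out the frequency present in a sampled cosine:

```python
import cmath
import math

def dft(samples):
    """Naive DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
            for k in range(n)]

# A cosine completing 3 cycles over a 32-sample window.
N = 32
signal = [math.cos(2 * math.pi * 3 * i / N) for i in range(N)]
spectrum = dft(signal)

# The magnitude of the complex output peaks at bin 3 (mirrored at bin N-3),
# revealing the frequency present in the original function.
peak = max(range(N // 2), key=lambda k: abs(spectrum[k]))
```

For a pure cosine at bin k, the peak magnitude comes out to N/2, which makes the result easy to verify by hand. Real applications would use an FFT (e.g. a library routine) rather than this O(N²) loop.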
Converting data from its original, raw format into structures optimized for analytics can be a challenge.
However, doing so successfully will offer a breadth of valuable information that can allow your business to implement new, innovative services.
ETL refers both to the technology for moving and transforming data and to the actual task of getting data from the source to the target, typically an analytic database, data warehouse, or data lake.
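A toy sketch of that extract-transform-load flow in Python, with an in-memory SQLite database standing in for the analytic target; the table and column names are made up for illustration:

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (an in-memory string here).
raw = "order_id,amount\n1,19.99\n2,5.00\n3,42.50\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and derive an analytics-friendly flag column.
records = [(int(r["order_id"]), float(r["amount"]), float(r["amount"]) >= 20.0)
           for r in rows]

# Load: write into the target optimized for analytics.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, is_large INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)

# The loaded structure now supports analytic queries directly.
total = db.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Production pipelines add scheduling, incremental loads, and error handling, but the extract/transform/load phases keep this same shape.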
Anyone else feel like the light in the kitchen was just flicked on and the cockroaches are scattering?
Make you wonder if new “updated “ restraining orders need to be implemented?
In summary, VMware’s licensing model has shifted toward subscriptions, and perpetual licenses are no longer available for purchase. Existing perpetual license holders can continue using them, but they won’t receive support once their SnS terms expire …
Yep Doc. The Trust is about to be revealed lol. subscription only from now on . No more support for perpetual licenses. Imagine that! Why? lol. The Mountain Reservoir has apparently sprung a leak .
I hope so. I have a feeling there are a lot of new licenses and old agreements being revised and updated. EVERYBODY GETS A CAR! LOL
You could be right on that. We wait.
A perfect example of who we can bundle our patents and em Path to. Verizon Give em a deal Lenny and get the ball rolling.