Aug. 17, 2012 – University of Utah computer scientist Cem Yuksel and colleagues have developed a computer modeling technique that simulates knitted fabric down to individual strands of yarn. The technique gives animators a practical route to modeling knitted garments.
“Realistic simulations of knitted fabrics have been highly labor intensive to prepare. Our modeling technique is the first method in computer graphics that permits fast and easy modeling of yarn-level knitted garments with various knitting patterns,” says study leader Yuksel, an assistant professor in the School of Computing.
“The next step in this research is deriving more efficient simulation techniques that will allow interactivity, which can open up new possibilities for textile designers as well as computer graphics artists,” he adds.
Yuksel joined the University of Utah this summer. His research focuses on computer graphics, ranging from modeling physical systems to real-time and offline rendering techniques. A press release issued by Yuksel’s former institution, Cornell University, is below:
CORNELL UNIVERSITY NEWS RELEASE
Computer-simulated knitting goes right down to the yarn
To put clothes on their characters, computer graphics artists usually simulate cloth by creating a thin sheet, then adding some sort of texture. But that doesn’t work for cable-knit sweaters. To make the image realistic, the computer has to simulate the surface right down to the intricate intertwining of yarn. So computer scientists are, in effect, teaching the computer to knit.
A method for building simulated knitted fabric out of an array of individual stitches was reported at the 39th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH), held Aug. 2-9 in Los Angeles, by Cem Yuksel of the University of Utah, Jonathan Kaldor of Facebook, and Steve Marschner and Doug James, Cornell University associate professors of computer science. The work was done when Yuksel was a postdoctoral researcher at Cornell and Kaldor a Ph.D. candidate.
In knitting, a single stitch is formed by pulling yarn through a loop. Rows of stitches, built on the loops formed by previous rows, make up the finished garment. The yarn can be pulled through in a variety of ways or multiple times, creating various shapes and textures. To simulate this realistically, a computer graphics artist would otherwise have to painstakingly model the 3-D structure of every stitch.
The Cornell innovation is to create a 3-D model of a single stitch and then combine multiple copies into a mesh, like tiles in a mosaic. The computer projects the mesh onto a model of the desired shape of the garment, treating each stitch as a tiny flat polygon that stretches and bends to fit the 3-D surface. Then it “relaxes” the graphic image of each stitch to fit the shape of its polygon, just as real yarn would stretch and bend to fit the shape of the wearer.
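For readers who think in code, the tiling step can be sketched in a few lines of Python. The sketch below assumes a hypothetical stitch template stored as control points over a unit square (with a height offset) and a garment surface given as a list of quads; it illustrates the general approach, not the authors’ actual implementation:

    import numpy as np

    def bilinear(quad, u, v):
        # Map unit-square coordinates (u, v) to a point on a 3-D quad,
        # given its four corners in counterclockwise order.
        p00, p10, p11, p01 = quad
        return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                + u * v * p11 + (1 - u) * v * p01)

    def instance_stitch(template, quad):
        # Place one stitch template into a quad: each template point is
        # (u, v, h), where (u, v) picks a spot on the quad and h offsets
        # the point along the quad's normal so the loops keep their depth.
        normal = np.cross(quad[1] - quad[0], quad[3] - quad[0])
        normal /= np.linalg.norm(normal)
        return np.array([bilinear(quad, u, v) + h * normal
                         for u, v, h in template])

    def tile_garment(template, quads):
        # Copy the stitch model into every face of the garment mesh,
        # like tiles in a mosaic; each copy stretches to fit its quad.
        return [instance_stitch(template, q) for q in quads]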
“We are actually changing the shape of the yarn loops that make up the stitches,” Marschner explained, “simulating how they wrap around other loops.” The result is a simulation with detail down to the yarn level. The trickiest part, Marschner said, is to make sure the images of yarn loops don’t slide through each other like ghosts. That would cause the simulation to “unravel” like a dropped stitch in real knitting.
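The flavor of that relaxation can be sketched as a simple energy minimization: springs pull the yarn toward its rest length while a contact penalty keeps any two strands at least a yarn’s width apart, so loops cannot slide through one another. The penalty form and step sizes below are illustrative assumptions, not the paper’s solver:

    import numpy as np

    def relax(points, rest_len, yarn_radius, iters=200, step=0.05):
        # Gradient descent on a toy yarn energy: stretch springs along
        # the strand, plus repulsion between non-adjacent control points
        # closer than one yarn diameter (the anti-"ghosting" constraint).
        pts = points.copy()
        min_dist = 2.0 * yarn_radius
        for _ in range(iters):
            grad = np.zeros_like(pts)
            # Stretch term: each segment resists deviating from rest_len.
            d = pts[1:] - pts[:-1]
            lens = np.linalg.norm(d, axis=1, keepdims=True)
            f = (lens - rest_len) * d / np.maximum(lens, 1e-9)
            grad[1:] += f
            grad[:-1] -= f
            # Contact term: push apart points that are too close
            # (O(n^2) here; a real solver would use a spatial grid).
            for i in range(len(pts)):
                for j in range(i + 2, len(pts)):
                    dij = pts[j] - pts[i]
                    dist = np.linalg.norm(dij)
                    if dist < min_dist:
                        push = (min_dist - dist) * dij / max(dist, 1e-9)
                        grad[j] -= push
                        grad[i] += push
            pts -= step * grad
        return pts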
The researchers tested their method with several patterns from knitting books and created images of dresses, sweaters, a shawl and a tea cozy. The simulations are highly realistic, but the researchers noted that the results of knitting a particular pattern depend on the yarn and needles used, as well as the style of the individual knitter. The method has some parameters that can be adjusted to simulate the effects of different needles or yarn, or different yarn tension used by the knitter, they said.
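In code, those adjustments amount to a handful of knobs on the model. The parameter names and defaults in this sketch are invented for illustration, not taken from the paper:

    from dataclasses import dataclass

    @dataclass
    class KnitParams:
        yarn_radius: float = 0.1  # thicker yarn fills the loops more
        loop_width: float = 1.0   # stitch spacing, set by needle size
        loop_height: float = 1.4  # row height, also needle-dependent
        tension: float = 1.0      # > 1 models a tighter knitter

        def rest_length(self, template_len):
            # A tighter knitter leaves less slack per stitch, so the
            # yarn's rest length shrinks and the loops pull in closer.
            return template_len / self.tension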
The process is computationally intensive, requiring several hours to simulate a garment (cable stitching takes the longest). As of today it would not be practical for an interactive application such as virtual reality, Marschner said, but it would be usable for movies. An animator would set up a sequence of frames and let it run overnight.
The research was supported by the National Science Foundation, the Alfred P. Sloan Foundation, the John Simon Guggenheim Memorial Foundation and Pixar.