19 posts in the 'Processing' category

  1. 2009.01.12 hello, world
  2. 2009.01.09 Running Processing
  3. 2009.01.04 Color Selector
  4. 2008.12.27 Processing 1.0.1
  5. 2008.10.23 A Few Principles of Video Tracking
  6. 2008.10.23 Processing Tutorials
  7. 2008.10.07 Messing with P5Sunflow
  8. 2008.10.07 More 3D Ribbons
  9. 2008.09.21 Processing (프로세싱)

hello, world

Let's display hello, world in Processing.

Type the following:

println("hello, world");

Then press Cmd+R (Mac) or Ctrl+R (Windows), or click Run in the Sketch menu.

hello, world appears in the text area below the editor.


Running Processing

Download Processing from the Processing site.

Windows version
Mac version
Linux version

Install the version that matches your OS. It is updated often, so it is worth checking the site regularly.

<Note> On the Mac, Java comes preinstalled, so there is nothing extra to do; on Windows, you must install Java. Also, on Windows, use an English (ASCII) user account name to avoid errors.

Click the Processing icon and the Processing window appears.


Color Selector

Processing provides a Color Selector for picking colors.
It is very handy; without knowing about it, you would probably just launch Photoshop to pick a color. (I did.)

Select Color Selector from the Tools menu and a window like the one below appears.

 

Processing 1.0.1


Processing 1.0.1 has been released, announced November 29.
Processing 1.0.1 released. Download here.

I just returned from “Oxford Project: Part II” and am pleased to share this announcement:


Today, on November 24, 2008, we launch the 1.0 version of the Processing software. Processing is a programming language, development environment, and online community that since 2001 has promoted software literacy within the visual arts. Initially created to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing quickly developed into a tool for creating finished professional work as well.

Processing is a free, open source alternative to proprietary software tools with expensive licenses, making it accessible to schools and individual students. Its open source status encourages the community participation and collaboration that is vital to Processing’s growth. Contributors share programs, contribute code, answer questions in the discussion forum, and build libraries to extend the possibilities of the software. The Processing community has written over seventy libraries to facilitate computer vision, data visualization, music, networking, and electronics.

Students at hundreds of schools around the world use Processing for classes ranging from middle school math education to undergraduate programming courses to graduate fine arts studios.

+ At New York University’s graduate ITP program, Processing is taught alongside its sister project Arduino and PHP as part of the foundation course for 100 incoming students each year.

+ At UCLA, undergraduates in the Design | Media Arts program use Processing to learn the concepts and skills needed to imagine the next generation of web sites and video games.

+ At Lincoln Public Schools in Nebraska and the Phoenix Country Day School in Arizona, middle school teachers are experimenting with Processing to supplement traditional algebra and geometry classes.

Tens of thousands of companies, artists, designers, architects, and researchers use Processing to create an incredibly diverse range of projects.

+ Design firms such as Motion Theory provide motion graphics created with Processing for the TV commercials of companies like Nike, Budweiser, and Hewlett-Packard.

+ Bands such as R.E.M., Radiohead, and Modest Mouse have featured animation created with Processing in their music videos.

+ Publications such as the journal Nature, the New York Times, Seed, and Communications of the ACM have commissioned information graphics created with Processing.

+ The artist group HeHe used Processing to produce their award-winning Nuage Vert installation, a large-scale public visualization of pollution levels in Helsinki.

+ The University of Washington’s Applied Physics Lab used Processing to create a visualization of a coastal marine ecosystem as a part of the NSF RISE project.

+ The Armstrong Institute for Interactive Media Studies at Miami University uses Processing to build visualization tools and analyze text for digital humanities research.

The Processing software runs on the Mac, Windows, and GNU/Linux platforms. With the click of a button, it exports applets for the Web or standalone applications for Mac, Windows, and GNU/Linux. Graphics from Processing programs may also be exported as PDF, DXF, or TIFF files and many other file formats. Future Processing releases will focus on faster 3D graphics, better video playback and capture, and enhancing the development environment. Some experimental versions of Processing have been adapted to other languages such as JavaScript, ActionScript, Ruby, Python, and Scala; other adaptations bring Processing to platforms like the OpenMoko, iPhone, and OLPC XO-1.

Processing was founded by Ben Fry and Casey Reas in 2001 while both were John Maeda’s students at the MIT Media Lab. Further development has taken place at the Interaction Design Institute Ivrea, Carnegie Mellon University, and UCLA, where Reas is chair of the Department of Design | Media Arts. Miami University, Oblong Industries, and the Rockefeller Foundation have generously contributed funding to the project.

The Cooper-Hewitt National Design Museum (a Smithsonian Institution) included Processing in its National Design Triennial. Works created with Processing were featured prominently in the Design and the Elastic Mind show at the Museum of Modern Art. Numerous design magazines, including Print, Eye, and Creativity, have highlighted the software.

For their work on Processing, Fry and Reas received the 2008 Muriel Cooper Prize from the Design Management Institute. The Processing community was awarded the 2005 Prix Ars Electronica Golden Nica award and the 2005 Interactive Design Prize from the Tokyo Type Director’s Club.

The Processing website (www.processing.org) includes tutorials, exhibitions, interviews, a complete reference, and hundreds of software examples. The Discourse forum hosts continuous community discussions and dialog with the developers.

Download images and more text about Processing:
www.processing.org/about/processing-1.0.zip

Questions and Answers:

What is new in Processing 1.0?
The most important aspect of this release is its stability. However, we have added many new features during the last few months. They include a new optimized 2D graphics engine, better integration for working with vector files, and the ability to write tools to enhance the development environment.

Who uses Processing?
Processing is used by a very diverse group of people, from children first exploring computer programming to professional artists, designers, architects, engineers, and scientists. Processing has a shallow learning curve to make writing code easier for beginners, but it also allows more experienced programmers to write sophisticated software. We’ve seen the number of people using Processing double each year for the last three years. The increased stability of the software and the publication of six related books in the last two years are the likely reasons for this increase.

What is the future of Processing?
The 1.0 version of Processing focuses on education and software sketching (prototyping). The next major release of the software will focus on professional users while retaining the simplicity that is Processing’s trademark. Specifically, future releases will increase the speed of programs that work with video and complex 3D graphics.

Books about Processing:
Fry, Ben. Visualizing Data: Exploring and Explaining Data with the Processing Environment. Sebastopol, CA: O’Reilly Media, 2008.
Greenberg, Ira. Processing: A Programming Handbook for Visual Designers and Artists. Berkeley, CA: Friends of Ed, an Apress Co, 2007.
Igoe, Tom. Making Things Talk: Practical Methods for Connecting Physical Objects. Make: projects. Sebastopol, CA: O’Reilly, 2007.
Reas, Casey, and Ben Fry. Processing: A Programming Handbook for Visual Designers and Artists. Cambridge, MA: MIT Press, 2007.
Shiffman, Daniel. Learning Processing: A Beginner’s Guide to Programming Images, Animation, and Interaction. The Morgan Kaufmann Series in Computer Graphics. Burlington, MA: Morgan Kaufmann/Elsevier, 2008.




A Few Principles of Video Tracking

The idea of tracking motion on a computer using a video camera has been around for a couple of decades, and it is still not perfect, because the construction of vision is a complex subject. We don't just "see"; we construct colors, edges, objects, depth, and other aspects of vision from the light that reaches our retinas. If you want to program a computer to see in the same way, it has to have subroutines that define the characteristics of vision and allow it to distinguish those characteristics in the array of pixels that comes from a camera. For more on that, see Visual Intelligence: How We Create What We See by Donald Hoffman. There are many other texts on the subject, but his is a nice popular introduction. What follows is a very brief introduction to some of the basic concepts behind computer vision and video manipulation.

There are a number of toolkits available for getting data from a camera and manipulating it. They vary from very high-level, simple graphical tools to low-level tools that allow you to manipulate the pixels directly. Which one you need depends on what you want to do. Regardless of your application, the first step is always the same: you get the pixels from the camera in an array of numbers, one frame at a time, and do things with the array. Typically, your array is a list of numbers, including the location, and the relative levels of red, green, and blue light at that location.
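In Processing, for instance, that array is `pixels[]`, and each entry is a 32-bit int packed as 0xAARRGGBB. A minimal plain-Java sketch of pulling the channel levels back out of one packed pixel (the class name and sample value here are just for illustration):

```java
public class PixelUnpack {
    // Recover the 0-255 red, green, and blue levels from a packed 0xAARRGGBB pixel.
    static int red(int pixel)   { return (pixel >> 16) & 0xFF; }
    static int green(int pixel) { return (pixel >> 8) & 0xFF; }
    static int blue(int pixel)  { return pixel & 0xFF; }

    public static void main(String[] args) {
        int pixel = 0xFF3366CC;           // opaque pixel: R=0x33, G=0x66, B=0xCC
        System.out.println(red(pixel));   // 51
        System.out.println(green(pixel)); // 102
        System.out.println(blue(pixel));  // 204
    }
}
```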

There are a few popular applications that people tend to develop when they attach a camera to a computer:

Video manipulation takes the image from the camera, changes it somehow, and re-presents it to the viewer in changed form. In this case, the computer doesn't need to be able to interpret objects in the image, because you're basically just applying filters, not unlike Photoshop filters.
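A filter of that kind is just a pass over the pixel array. As a simplified plain-Java illustration (synthetic 0xRRGGBB pixels; the class name is made up), here is a negative filter that inverts every channel:

```java
public class InvertFilter {
    // Invert each 8-bit channel of every packed 0xRRGGBB pixel in the frame.
    static int[] invert(int[] pixels) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = 0xFFFFFF ^ pixels[i]; // flip all 24 color bits
        }
        return out;
    }

    public static void main(String[] args) {
        int[] frame = { 0x000000, 0xFF8000 };          // black, orange
        int[] inv = invert(frame);
        System.out.printf("%06X %06X%n", inv[0], inv[1]); // FFFFFF 007FFF
    }
}
```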

Tracking looks for a blob of pixels that's unique, perhaps the brightest blob, or the reddest blob, and tracks its location over a series of frames. Tracking can be complicated, because the brightest blob from one frame to another might not be produced by the same object.

Object recognition looks for a blob that matches a particular pattern, like a face, identifies that blob as an object, and keeps track of its location over time. Object recognition is the hardest of all three applications, because it involves both tracking and pattern recognition. If the object rotates, or if its colors shift because of a lighting change, or it gets smaller as it moves away from the camera, the computer has to be programmed to compensate. If it's not, it may fail to "see" the object, even though it's still there.

There are a number of programs available for video manipulation. Jitter, a plugin for Max/MSP, is a popular one. David Rokeby's softVNS is another plugin for Max. Mark Coniglio's Isadora is a visual programming environment like Max/MSP that's dedicated to video control, optimized for live events like dance and theatre. Image/ine is similar to Isadora, though aging, as it hasn't been updated in a couple of years. There are also countless VJ packages that will let you manipulate live video. In addition, most text-based programming languages have toolkits too. Danny Rozin's TrackThemColors Pro does the job for Macromedia Director MX, as does Josh Nimoy's Myron. Myron also works for Processing. Dan O'Sullivan's vbp does the job for Java. Dan has an excellent site on the subject as well, with many more links. He's also got a simple example for Processing on his site. Almost all of these toolkits can handle video tracking as well.

There are two methods you'll commonly find in video tracking software: the zone approach and the blob approach. Software such as softVNS, Eric Singer's Cyclops, or cv.jit (a plugin for Jitter that affords video tracking) takes the zone approach. They map the video image into zones, and give you information about the amount of change in each zone from frame to frame. This is useful if your camera is in a fixed location, and you want fixed zones that trigger activity. Eric has a good example on his site in which he uses Cyclops to play virtual drums. The zone approach makes it difficult to track objects across an image, however. TrackThemColors and Myron are examples of the blob approach, in that they return information about unique blobs within the image, making it easier to track an object moving across an image.
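The zone approach can be sketched in a few lines of plain Java (a simplified illustration, not code from any of these toolkits; grayscale frames and the grid size are assumptions):

```java
public class ZoneDiff {
    // Frames are grayscale, row-major, width*height brightness values in 0-255.
    // Returns the summed per-pixel change in each zone of a zonesX-by-zonesY grid.
    static int[] zoneChange(int[] prev, int[] curr, int w, int h, int zonesX, int zonesY) {
        int[] change = new int[zonesX * zonesY];
        int zw = w / zonesX, zh = h / zonesY;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // clamp so edge pixels fall into the last zone when w or h isn't divisible
                int zone = Math.min(y / zh, zonesY - 1) * zonesX + Math.min(x / zw, zonesX - 1);
                change[zone] += Math.abs(curr[y * w + x] - prev[y * w + x]);
            }
        }
        return change;
    }

    public static void main(String[] args) {
        // 4x4 frame split into a 2x2 grid; only the top-left quadrant changes
        int[] prev = new int[16];
        int[] curr = new int[16];
        curr[0] = 200; curr[1] = 200; curr[4] = 200;   // pixels (0,0), (1,0), (0,1)
        int[] change = zoneChange(prev, curr, 4, 4, 2, 2);
        System.out.println(java.util.Arrays.toString(change)); // [600, 0, 0, 0]
    }
}
```

A real toolkit would feed successive camera frames through this and fire an event when a zone's change crosses a threshold.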

At the most basic level, a computer can tell you a pixel's position, and its color (if you are using a color camera). From those facts, other information can be determined:

  • The brightest pixel can be determined by seeing which pixel has the highest color values.
  • A "blob" of color can be determined by choosing a starting color, setting a range of variation, and checking the neighboring pixels of a selected pixel to see if they are in the range of variation.
  • Areas of change can be determined by comparing one frame of video with a previous frame, and seeing which pixels have the most significantly different color values.
  • Areas of pattern can be followed by selecting an area to track, and continuing to search for areas that match the pattern of pixels selected. Again, a range of variation can be set to allow for "fuzziness".
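The first of these, finding the brightest pixel, might look like this in plain Java (a simplified sketch over packed 0xRRGGBB pixels; the class name and test frame are made up for illustration):

```java
public class BrightestPixel {
    // Scan the frame once and return the index of the pixel whose
    // summed red + green + blue channel values are highest.
    static int brightestIndex(int[] pixels) {
        int best = 0, bestSum = -1;
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int sum = ((p >> 16) & 0xFF) + ((p >> 8) & 0xFF) + (p & 0xFF);
            if (sum > bestSum) { bestSum = sum; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] frame = { 0x101010, 0x80FF40, 0x000000, 0x2020FF };
        System.out.println(brightestIndex(frame)); // 1
        // For a real frame, the index converts back to coordinates
        // with x = i % width and y = i / width.
    }
}
```

Tracking over time is just running this on every frame and watching how the winning index moves; as the article notes, the brightest blob in one frame may not come from the same object in the next.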

A few practical principles follow from this:

Colors to be tracked need consistent lighting. The computer can't tell if my shirt is red, for example; it can tell that one pixel or a range of pixels contains the color value [255,0,0], perhaps, but if the lighting changes and my shirt appears gray because there is no red light for it to reflect, the computer will no longer "see" it as red.

Shapes to be tracked need to stay somewhat consistent in shape. The computer doesn't have stereoscopic vision (two eyes that allow us to determine depth by comparing the difference in image that our two eyes receive), so it sees everything as flat. If your hand turns sideways with respect to the camera, the pattern changes because your hand appears thinner. So the computer may no longer recognize your hand as your hand.

One simple way of getting consistent tracking is to reduce the amount of information the computer has to track. For example, if the camera is equipped with an infrared filter, it will see only infrared light. This is very useful, since incandescent sources (lightbulbs with filaments) give off infrared, whereas fluorescent sources don't. Furthermore, the human body doesn't give off infrared light either. This is also useful for tracking in front of a projection, since the image from most LCD projectors contains no infrared light.

When considering where to position the camera, consider what information you want to track. For example, if you want to track a viewer's motion in two dimensions across a floor, then positioning a camera in front of the viewer may not be the best choice. Consider ways of positioning the camera overhead, or underneath the viewer.

Often it is useful to put the tracking camera behind the projection surface, and use a translucent screen, and track what changes on the surface of the screen. This way, the viewer can "draw" with light or darkness on the screen.


Source: http://www.tigoe.net/pcomp/videoTrack.shtml


Processing Tutorials

There is not yet a really well-organized Processing tutorial, in Korea or abroad, but here is a collection of the better, more approachable ones. They should be helpful for students and for interaction designers who lack a technical background. They are in English, but at an easy-to-follow level, so there is no need to worry. I hope they help.

http://itp.nyu.edu/~sve204/icm_fall06/
http://itp.nyu.edu/ICM/james/
http://itp.nyu.edu/icm/shiffman/
http://www.shiffman.net/teaching/workshop/

http://www.thesystemis.com/makingThingsMove/index.html
http://thesystemis.com/eatingVideo/
http://itp.nyu.edu/~dbo3/cgi-bin/ClassWiki.cgi?ICMVideo

Thanks: Chris O’Shea + Processing discourse

Source: http://www.digitypo.com/blog/entry/Processing-Tutorials



Messing with P5Sunflow


Cube Explosion

Ray tracing is the CG rendering technique used in Pixar movies and most other broadcast-quality CG. Basically, it bounces millions of virtual photons around the scene to simulate how objects reflect light and cast shadows on each other. This produces super-realistic images at the cost of being very computationally expensive.

P5Sunflow is a Processing version of the SunFlow open source Java ray tracing implementation created by Mark Chadwick.

P5Sunflow produces images with creamy shadows and a solid sculptural feel that are quite different from anything you can achieve with most real-time 3D engines. Unfortunately, rendering times are really slow. The videos below are overnight renders. I’d be interested to find out if there is some kind of ‘fake’ ray tracing that produces similar results more quickly.

Click through to see the HD and downloadable QuickTime versions. These work well looped in QT.


Cube Wall from felixturner on Vimeo.


Sunflow Phase Towers from felixturner on Vimeo.

You can download the Processing sketch for the cube wall animation here. To use it you need to install the P5Sunflow library as described here. To run P5Sunflow you need to use the version of Processing that comes without Java, since P5Sunflow requires Java 1.5 and Processing ships with Java 1.4.


More 3D Ribbons


ribbons!

If you follow this blog you’ll know I’ve been obsessed with 3D ribbons for a while now. I ported my AS ribbon code to Processing and I’m very happy with how it turned out. It’s refreshing not to have to worry about frame rates, since Processing’s 3D performance is so good.

Here’s the processing sketch and source code. You will need a fast machine with OpenGL support to run the sketch - basically don’t blame me if it crashes your browser ;).

[UPDATE - it seems OpenGL sketches don't like to run off the web. I uploaded a P3D version here that should be more compatible. Since it's not using additive blending you get a strange pink color instead of the nice glowy white...]


Untitled from felixturner on Vimeo.

It would be nice to get some realtime glow/blur in here to smooth things out a bit. I think this library may do the trick but it could take me a while to figure it out.


Processing(프로세싱)




The Processing language was built to make it easy to create sophisticated visual and conceptual structures. It is an open-source programming language and environment for people who want to program images, animation, and interaction, used mainly by students, artists, designers, and researchers, and by anyone interested in prototyping and production. It was created to teach the fundamentals of computer programming within a visual context, and to serve as a software sketchbook and professional production tool. Because many drawing commands are built in, it is an easier language to pick up than Java, C++, and other languages.

Understanding Computer Languages for Artists, the first series in the Nabi Academy's summer 2008 workshop ".MOV", uses the open-source development environment Processing. The class is aimed at media artists, designers, and others doing visual work, and tries to move beyond the limits of commercial software toward expanding each participant's creative ideas.

In this hands-on class, built around working directly in a computer language, the emphasis is on work whose final output medium is video, taking a different approach to programming: producing video as primary source material through code, real-time image processing of video, and transforming produced footage - new ways of approaching the medium.


Reference books:

Casey Reas and Ben Fry

“Processing: a programming handbook for visual designers and artists”

John Maeda ‘The Laws of Simplicity’

John Maeda ‘Creative Code’

John Maeda ‘ ‘Maeda at Media’

Ben Fry ‘Visualizing Data’

 

Reference sites: http://www.processing.org

http://www.shiffman.net/teaching/icm

http://workshop.evolutionzone.com/
